00:00:00.000 Started by upstream project "autotest-per-patch" build number 127088 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.073 The recommended git tool is: git 00:00:00.073 using credential 00000000-0000-0000-0000-000000000002 00:00:00.075 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.113 Fetching changes from the remote Git repository 00:00:00.115 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.174 Using shallow fetch with depth 1 00:00:00.174 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.174 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.026 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.035 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.046 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.046 > git config core.sparsecheckout # timeout=10 00:00:06.054 > git read-tree -mu HEAD # timeout=10 00:00:06.067 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.129 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.129 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.211 [Pipeline] Start of Pipeline 00:00:06.224 [Pipeline] library 00:00:06.226 Loading library shm_lib@master 00:00:06.226 Library shm_lib@master is cached. Copying from home. 00:00:06.240 [Pipeline] node 00:00:06.249 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.250 [Pipeline] { 00:00:06.259 [Pipeline] catchError 00:00:06.260 [Pipeline] { 00:00:06.277 [Pipeline] wrap 00:00:06.285 [Pipeline] { 00:00:06.290 [Pipeline] stage 00:00:06.291 [Pipeline] { (Prologue) 00:00:06.439 [Pipeline] sh 00:00:06.718 + logger -p user.info -t JENKINS-CI 00:00:06.735 [Pipeline] echo 00:00:06.736 Node: GP11 00:00:06.743 [Pipeline] sh 00:00:07.037 [Pipeline] setCustomBuildProperty 00:00:07.049 [Pipeline] echo 00:00:07.051 Cleanup processes 00:00:07.056 [Pipeline] sh 00:00:07.340 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.340 961617 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.351 [Pipeline] sh 00:00:07.629 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.629 ++ grep -v 'sudo pgrep' 00:00:07.629 ++ awk '{print $1}' 00:00:07.629 + sudo kill -9 00:00:07.629 + true 00:00:07.642 [Pipeline] cleanWs 00:00:07.651 [WS-CLEANUP] Deleting project workspace... 00:00:07.651 [WS-CLEANUP] Deferred wipeout is used... 
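The cleanup above follows a common Jenkins idiom: pgrep -af lists every process whose full command line mentions the workspace, grep -v 'sudo pgrep' drops the search itself from the list, awk '{print $1}' extracts the PIDs, and the trailing + true keeps the step from failing the build when no stale processes exist (here kill -9 received no PID at all). A minimal standalone sketch of the same pattern, with an illustrative workspace path:

  #!/usr/bin/env bash
  # Kill any processes left over from a previous run of this workspace.
  ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk  # illustrative path
  pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
  # $pids is left unquoted so each PID becomes its own argument;
  # '|| true' mirrors the '+ true' above when the list is empty.
  sudo kill -9 $pids || true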
00:00:07.657 [WS-CLEANUP] done 00:00:07.659 [Pipeline] setCustomBuildProperty 00:00:07.667 [Pipeline] sh 00:00:07.943 + sudo git config --global --replace-all safe.directory '*' 00:00:08.019 [Pipeline] httpRequest 00:00:08.034 [Pipeline] echo 00:00:08.035 Sorcerer 10.211.164.101 is alive 00:00:08.043 [Pipeline] httpRequest 00:00:08.047 HttpMethod: GET 00:00:08.048 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.048 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.062 Response Code: HTTP/1.1 200 OK 00:00:08.062 Success: Status code 200 is in the accepted range: 200,404 00:00:08.063 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:12.244 [Pipeline] sh 00:00:12.521 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:12.537 [Pipeline] httpRequest 00:00:12.563 [Pipeline] echo 00:00:12.565 Sorcerer 10.211.164.101 is alive 00:00:12.573 [Pipeline] httpRequest 00:00:12.578 HttpMethod: GET 00:00:12.579 URL: http://10.211.164.101/packages/spdk_29c5e1f47bd1554a291933c7004262dcb1dda178.tar.gz 00:00:12.579 Sending request to url: http://10.211.164.101/packages/spdk_29c5e1f47bd1554a291933c7004262dcb1dda178.tar.gz 00:00:12.584 Response Code: HTTP/1.1 200 OK 00:00:12.585 Success: Status code 200 is in the accepted range: 200,404 00:00:12.585 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_29c5e1f47bd1554a291933c7004262dcb1dda178.tar.gz 00:01:05.204 [Pipeline] sh 00:01:05.488 + tar --no-same-owner -xf spdk_29c5e1f47bd1554a291933c7004262dcb1dda178.tar.gz 00:01:08.784 [Pipeline] sh 00:01:09.065 + git -C spdk log --oneline -n5 00:01:09.065 29c5e1f47 nvmf/tcp: Add support for the interrupt mode in NVMe-of TCP 00:01:09.065 0bb5c21e2 nvmf: move register nvmf_poll_group_poll interrupt to nvmf 00:01:09.065 8968f30fe nvmf/tcp: replace pending_buf_queue with nvmf_tcp_request_get_buffers 00:01:09.065 13040d616 nvmf: enable iobuf based queuing for nvmf requests 00:01:09.065 5c0b15eed nvmf/tcp: fix spdk_nvmf_tcp_control_msg_list queuing 00:01:09.076 [Pipeline] } 00:01:09.091 [Pipeline] // stage 00:01:09.099 [Pipeline] stage 00:01:09.100 [Pipeline] { (Prepare) 00:01:09.116 [Pipeline] writeFile 00:01:09.132 [Pipeline] sh 00:01:09.412 + logger -p user.info -t JENKINS-CI 00:01:09.426 [Pipeline] sh 00:01:09.709 + logger -p user.info -t JENKINS-CI 00:01:09.720 [Pipeline] sh 00:01:10.018 + cat autorun-spdk.conf 00:01:10.018 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.018 SPDK_TEST_NVMF=1 00:01:10.018 SPDK_TEST_NVME_CLI=1 00:01:10.018 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.018 SPDK_TEST_NVMF_NICS=e810 00:01:10.018 SPDK_TEST_VFIOUSER=1 00:01:10.018 SPDK_RUN_UBSAN=1 00:01:10.018 NET_TYPE=phy 00:01:10.025 RUN_NIGHTLY=0 00:01:10.032 [Pipeline] readFile 00:01:10.060 [Pipeline] withEnv 00:01:10.062 [Pipeline] { 00:01:10.076 [Pipeline] sh 00:01:10.360 + set -ex 00:01:10.360 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:10.360 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:10.360 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:10.360 ++ SPDK_TEST_NVMF=1 00:01:10.360 ++ SPDK_TEST_NVME_CLI=1 00:01:10.360 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:10.360 ++ SPDK_TEST_NVMF_NICS=e810 00:01:10.360 ++ SPDK_TEST_VFIOUSER=1 00:01:10.360 ++ SPDK_RUN_UBSAN=1 00:01:10.360 ++ NET_TYPE=phy 00:01:10.360 ++ RUN_NIGHTLY=0 00:01:10.360 + case $SPDK_TEST_NVMF_NICS in 
00:01:10.360 + DRIVERS=ice 00:01:10.360 + [[ tcp == \r\d\m\a ]] 00:01:10.360 + [[ -n ice ]] 00:01:10.360 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:10.360 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:14.551 rmmod: ERROR: Module irdma is not currently loaded 00:01:14.551 rmmod: ERROR: Module i40iw is not currently loaded 00:01:14.551 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:14.551 + true 00:01:14.551 + for D in $DRIVERS 00:01:14.551 + sudo modprobe ice 00:01:14.551 + exit 0 00:01:14.559 [Pipeline] } 00:01:14.575 [Pipeline] // withEnv 00:01:14.580 [Pipeline] } 00:01:14.595 [Pipeline] // stage 00:01:14.603 [Pipeline] catchError 00:01:14.605 [Pipeline] { 00:01:14.616 [Pipeline] timeout 00:01:14.616 Timeout set to expire in 50 min 00:01:14.617 [Pipeline] { 00:01:14.628 [Pipeline] stage 00:01:14.630 [Pipeline] { (Tests) 00:01:14.644 [Pipeline] sh 00:01:14.924 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.924 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.924 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.924 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:14.924 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:14.924 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:14.924 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:14.924 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:14.924 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:14.924 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:14.924 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:14.924 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:14.924 + source /etc/os-release 00:01:14.924 ++ NAME='Fedora Linux' 00:01:14.924 ++ VERSION='38 (Cloud Edition)' 00:01:14.924 ++ ID=fedora 00:01:14.924 ++ VERSION_ID=38 00:01:14.924 ++ VERSION_CODENAME= 00:01:14.924 ++ PLATFORM_ID=platform:f38 00:01:14.924 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:14.924 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:14.924 ++ LOGO=fedora-logo-icon 00:01:14.924 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:14.924 ++ HOME_URL=https://fedoraproject.org/ 00:01:14.924 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:14.924 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:14.924 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:14.924 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:14.924 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:14.924 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:14.924 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:14.924 ++ SUPPORT_END=2024-05-14 00:01:14.924 ++ VARIANT='Cloud Edition' 00:01:14.924 ++ VARIANT_ID=cloud 00:01:14.924 + uname -a 00:01:14.924 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:14.924 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:15.858 Hugepages 00:01:15.858 node hugesize free / total 00:01:15.858 node0 1048576kB 0 / 0 00:01:15.858 node0 2048kB 0 / 0 00:01:15.858 node1 1048576kB 0 / 0 00:01:15.858 node1 2048kB 0 / 0 00:01:15.858 00:01:15.858 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:15.858 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.3 
8086 0e23 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:15.858 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:15.858 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:15.858 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:15.858 + rm -f /tmp/spdk-ld-path 00:01:15.858 + source autorun-spdk.conf 00:01:15.858 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:15.858 ++ SPDK_TEST_NVMF=1 00:01:15.858 ++ SPDK_TEST_NVME_CLI=1 00:01:15.858 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:15.858 ++ SPDK_TEST_NVMF_NICS=e810 00:01:15.858 ++ SPDK_TEST_VFIOUSER=1 00:01:15.858 ++ SPDK_RUN_UBSAN=1 00:01:15.858 ++ NET_TYPE=phy 00:01:15.858 ++ RUN_NIGHTLY=0 00:01:15.858 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:15.858 + [[ -n '' ]] 00:01:15.858 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:15.858 + for M in /var/spdk/build-*-manifest.txt 00:01:15.858 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:15.858 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:15.858 + for M in /var/spdk/build-*-manifest.txt 00:01:15.858 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:15.858 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:16.116 ++ uname 00:01:16.116 + [[ Linux == \L\i\n\u\x ]] 00:01:16.116 + sudo dmesg -T 00:01:16.116 + sudo dmesg --clear 00:01:16.116 + dmesg_pid=962921 00:01:16.116 + [[ Fedora Linux == FreeBSD ]] 00:01:16.116 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.116 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:16.116 + sudo dmesg -Tw 00:01:16.116 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:16.116 + [[ -x /usr/src/fio-static/fio ]] 00:01:16.116 + export FIO_BIN=/usr/src/fio-static/fio 00:01:16.116 + FIO_BIN=/usr/src/fio-static/fio 00:01:16.116 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:16.116 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:16.116 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:16.116 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.116 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:16.116 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:16.116 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.116 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:16.116 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:16.116 Test configuration: 00:01:16.116 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:16.116 SPDK_TEST_NVMF=1 00:01:16.116 SPDK_TEST_NVME_CLI=1 00:01:16.116 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:16.116 SPDK_TEST_NVMF_NICS=e810 00:01:16.116 SPDK_TEST_VFIOUSER=1 00:01:16.116 SPDK_RUN_UBSAN=1 00:01:16.116 NET_TYPE=phy 00:01:16.116 RUN_NIGHTLY=0 19:30:33 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:16.116 19:30:33 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:16.116 19:30:33 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:16.116 19:30:33 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:16.116 19:30:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.116 19:30:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.116 19:30:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.116 19:30:33 -- paths/export.sh@5 -- $ export PATH 00:01:16.116 19:30:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:16.116 19:30:33 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:16.116 19:30:33 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:16.116 19:30:33 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721842233.XXXXXX 00:01:16.116 19:30:33 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721842233.mNIzDx 00:01:16.116 19:30:33 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:16.117 19:30:33 -- 
common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:16.117 19:30:33 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:16.117 19:30:33 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:16.117 19:30:33 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:16.117 19:30:33 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:16.117 19:30:33 -- common/autotest_common.sh@399 -- $ xtrace_disable 00:01:16.117 19:30:33 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.117 19:30:33 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:16.117 19:30:33 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:16.117 19:30:33 -- pm/common@17 -- $ local monitor 00:01:16.117 19:30:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.117 19:30:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.117 19:30:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.117 19:30:33 -- pm/common@21 -- $ date +%s 00:01:16.117 19:30:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:16.117 19:30:33 -- pm/common@21 -- $ date +%s 00:01:16.117 19:30:33 -- pm/common@25 -- $ sleep 1 00:01:16.117 19:30:33 -- pm/common@21 -- $ date +%s 00:01:16.117 19:30:33 -- pm/common@21 -- $ date +%s 00:01:16.117 19:30:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842233 00:01:16.117 19:30:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842233 00:01:16.117 19:30:33 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842233 00:01:16.117 19:30:33 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721842233 00:01:16.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842233_collect-vmstat.pm.log 00:01:16.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842233_collect-cpu-load.pm.log 00:01:16.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842233_collect-cpu-temp.pm.log 00:01:16.117 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721842233_collect-bmc-pm.bmc.pm.log 00:01:17.053 19:30:34 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:17.053 19:30:34 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:17.053 19:30:34 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:17.053 19:30:34 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:17.053 19:30:34 -- spdk/autobuild.sh@16 -- $ date -u 00:01:17.053 Wed Jul 24 05:30:34 PM UTC 2024 00:01:17.053 19:30:34 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:17.053 v24.09-pre-314-g29c5e1f47 00:01:17.053 19:30:34 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:17.053 19:30:34 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:17.053 19:30:34 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:17.053 19:30:34 -- common/autotest_common.sh@1102 -- $ '[' 3 -le 1 ']' 00:01:17.053 19:30:34 -- common/autotest_common.sh@1108 -- $ xtrace_disable 00:01:17.053 19:30:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:17.053 ************************************ 00:01:17.053 START TEST ubsan 00:01:17.053 ************************************ 00:01:17.053 19:30:34 ubsan -- common/autotest_common.sh@1126 -- $ echo 'using ubsan' 00:01:17.053 using ubsan 00:01:17.053 00:01:17.053 real 0m0.000s 00:01:17.053 user 0m0.000s 00:01:17.053 sys 0m0.000s 00:01:17.053 19:30:34 ubsan -- common/autotest_common.sh@1127 -- $ xtrace_disable 00:01:17.053 19:30:34 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:17.053 ************************************ 00:01:17.053 END TEST ubsan 00:01:17.053 ************************************ 00:01:17.053 19:30:34 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:17.053 19:30:34 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:17.053 19:30:34 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:17.053 19:30:34 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:17.053 19:30:34 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:17.053 19:30:34 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:17.053 19:30:34 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:17.053 19:30:34 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:17.053 19:30:34 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:17.311 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:17.311 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:17.569 Using 'verbs' RDMA provider 00:01:28.103 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:38.073 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:38.073 Creating mk/config.mk...done. 00:01:38.073 Creating mk/cc.flags.mk...done. 00:01:38.073 Type 'make' to build. 00:01:38.073 19:30:54 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:38.073 19:30:54 -- common/autotest_common.sh@1102 -- $ '[' 3 -le 1 ']' 00:01:38.073 19:30:54 -- common/autotest_common.sh@1108 -- $ xtrace_disable 00:01:38.073 19:30:54 -- common/autotest_common.sh@10 -- $ set +x 00:01:38.073 ************************************ 00:01:38.073 START TEST make 00:01:38.073 ************************************ 00:01:38.073 19:30:54 make -- common/autotest_common.sh@1126 -- $ make -j48 00:01:38.073 make[1]: Nothing to be done for 'all'. 
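The configure invocation above also shows how the autorun-spdk.conf settings sourced earlier become build options: SPDK_RUN_UBSAN=1 surfaces as --enable-ubsan, and SPDK_TEST_VFIOUSER=1 as --with-vfio-user, inside config_params. A hedged sketch of that mapping (the real flag assembly lives in SPDK's common autobuild scripts; only the two correspondences visible in this log are shown):

  # Simplified sketch: derive configure flags from the test configuration.
  config_params='--enable-debug --enable-werror'
  [[ ${SPDK_RUN_UBSAN:-0} -eq 1 ]] && config_params+=' --enable-ubsan'
  [[ ${SPDK_TEST_VFIOUSER:-0} -eq 1 ]] && config_params+=' --with-vfio-user'
  ./configure $config_params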
00:01:39.460 The Meson build system 00:01:39.460 Version: 1.3.1 00:01:39.460 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:39.460 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:39.460 Build type: native build 00:01:39.460 Project name: libvfio-user 00:01:39.460 Project version: 0.0.1 00:01:39.460 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:39.460 C linker for the host machine: cc ld.bfd 2.39-16 00:01:39.460 Host machine cpu family: x86_64 00:01:39.460 Host machine cpu: x86_64 00:01:39.460 Run-time dependency threads found: YES 00:01:39.460 Library dl found: YES 00:01:39.460 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:39.460 Run-time dependency json-c found: YES 0.17 00:01:39.460 Run-time dependency cmocka found: YES 1.1.7 00:01:39.460 Program pytest-3 found: NO 00:01:39.460 Program flake8 found: NO 00:01:39.460 Program misspell-fixer found: NO 00:01:39.460 Program restructuredtext-lint found: NO 00:01:39.460 Program valgrind found: YES (/usr/bin/valgrind) 00:01:39.460 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:39.460 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:39.460 Compiler for C supports arguments -Wwrite-strings: YES 00:01:39.460 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:39.460 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:39.460 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:39.460 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
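The Meson output above comes from configuring libvfio-user out of the SPDK tree; each "Compiler for C supports arguments ..." line is Meson probing a warning flag before adding it to the build. A sketch of the setup command that plausibly produced this output, with option values taken from the "User defined options" summary just below (the exact invocation is not shown in the log, so this is an assumption):

  # Assumed setup step for the source and build directories named in the log.
  meson setup spdk/build/libvfio-user/build-debug spdk/libvfio-user \
    --buildtype debug -Ddefault_library=shared -Dlibdir=/usr/local/lib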
00:01:39.460 Build targets in project: 8 00:01:39.460 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:39.460 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:39.460 00:01:39.460 libvfio-user 0.0.1 00:01:39.460 00:01:39.460 User defined options 00:01:39.460 buildtype : debug 00:01:39.460 default_library: shared 00:01:39.460 libdir : /usr/local/lib 00:01:39.460 00:01:39.460 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:40.038 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:40.318 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:40.318 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:40.318 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:40.318 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:40.318 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:40.318 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:40.318 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:40.318 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:40.318 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:40.318 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:40.318 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:40.318 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:40.318 [13/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:40.318 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:40.318 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:40.318 [16/37] Compiling C object samples/null.p/null.c.o 00:01:40.594 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:40.594 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:40.594 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:40.594 [20/37] Compiling C object samples/client.p/client.c.o 00:01:40.594 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:40.594 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:40.594 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:40.594 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:40.594 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:40.594 [26/37] Compiling C object samples/server.p/server.c.o 00:01:40.594 [27/37] Linking target samples/client 00:01:40.594 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:40.594 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:40.594 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:40.865 [31/37] Linking target test/unit_tests 00:01:40.865 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:40.865 [33/37] Linking target samples/server 00:01:40.865 [34/37] Linking target samples/lspci 00:01:40.865 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:40.865 [36/37] Linking target samples/null 00:01:40.865 [37/37] Linking target samples/gpio-pci-idio-16 00:01:41.128 INFO: autodetecting backend as ninja 00:01:41.128 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
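The install on the next line stages libvfio-user inside the workspace rather than on the live system: DESTDIR is prepended to every install path, so the configured libdir /usr/local/lib actually lands under spdk/build/libvfio-user. A minimal sketch of the same staging pattern, with an illustrative stage directory:

  # DESTDIR staging: files land in ./stage/usr/local/lib/... instead of /usr/local/lib.
  DESTDIR="$PWD/stage" meson install --quiet -C build-debug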
00:01:41.128 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:41.701 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:41.701 ninja: no work to do. 00:01:46.980 The Meson build system 00:01:46.980 Version: 1.3.1 00:01:46.980 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:46.980 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:46.980 Build type: native build 00:01:46.980 Program cat found: YES (/usr/bin/cat) 00:01:46.980 Project name: DPDK 00:01:46.980 Project version: 24.03.0 00:01:46.980 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:46.980 C linker for the host machine: cc ld.bfd 2.39-16 00:01:46.980 Host machine cpu family: x86_64 00:01:46.980 Host machine cpu: x86_64 00:01:46.980 Message: ## Building in Developer Mode ## 00:01:46.980 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:46.980 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:46.980 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:46.980 Program python3 found: YES (/usr/bin/python3) 00:01:46.980 Program cat found: YES (/usr/bin/cat) 00:01:46.980 Compiler for C supports arguments -march=native: YES 00:01:46.980 Checking for size of "void *" : 8 00:01:46.980 Checking for size of "void *" : 8 (cached) 00:01:46.980 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:46.980 Library m found: YES 00:01:46.980 Library numa found: YES 00:01:46.980 Has header "numaif.h" : YES 00:01:46.980 Library fdt found: NO 00:01:46.980 Library execinfo found: NO 00:01:46.980 Has header "execinfo.h" : YES 00:01:46.980 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:46.980 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:46.980 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:46.980 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:46.980 Run-time dependency openssl found: YES 3.0.9 00:01:46.980 Run-time dependency libpcap found: YES 1.10.4 00:01:46.980 Has header "pcap.h" with dependency libpcap: YES 00:01:46.980 Compiler for C supports arguments -Wcast-qual: YES 00:01:46.981 Compiler for C supports arguments -Wdeprecated: YES 00:01:46.981 Compiler for C supports arguments -Wformat: YES 00:01:46.981 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:46.981 Compiler for C supports arguments -Wformat-security: NO 00:01:46.981 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:46.981 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:46.981 Compiler for C supports arguments -Wnested-externs: YES 00:01:46.981 Compiler for C supports arguments -Wold-style-definition: YES 00:01:46.981 Compiler for C supports arguments -Wpointer-arith: YES 00:01:46.981 Compiler for C supports arguments -Wsign-compare: YES 00:01:46.981 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:46.981 Compiler for C supports arguments -Wundef: YES 00:01:46.981 Compiler for C supports arguments -Wwrite-strings: YES 00:01:46.981 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:46.981 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:46.981 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:46.981 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:46.981 Program objdump found: YES (/usr/bin/objdump) 00:01:46.981 Compiler for C supports arguments -mavx512f: YES 00:01:46.981 Checking if "AVX512 checking" compiles: YES 00:01:46.981 Fetching value of define "__SSE4_2__" : 1 00:01:46.981 Fetching value of define "__AES__" : 1 00:01:46.981 Fetching value of define "__AVX__" : 1 00:01:46.981 Fetching value of define "__AVX2__" : (undefined) 00:01:46.981 Fetching value of define "__AVX512BW__" : (undefined) 00:01:46.981 Fetching value of define "__AVX512CD__" : (undefined) 00:01:46.981 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:46.981 Fetching value of define "__AVX512F__" : (undefined) 00:01:46.981 Fetching value of define "__AVX512VL__" : (undefined) 00:01:46.981 Fetching value of define "__PCLMUL__" : 1 00:01:46.981 Fetching value of define "__RDRND__" : 1 00:01:46.981 Fetching value of define "__RDSEED__" : (undefined) 00:01:46.981 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:46.981 Fetching value of define "__znver1__" : (undefined) 00:01:46.981 Fetching value of define "__znver2__" : (undefined) 00:01:46.981 Fetching value of define "__znver3__" : (undefined) 00:01:46.981 Fetching value of define "__znver4__" : (undefined) 00:01:46.981 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:46.981 Message: lib/log: Defining dependency "log" 00:01:46.981 Message: lib/kvargs: Defining dependency "kvargs" 00:01:46.981 Message: lib/telemetry: Defining dependency "telemetry" 00:01:46.981 Checking for function "getentropy" : NO 00:01:46.981 Message: lib/eal: Defining dependency "eal" 00:01:46.981 Message: lib/ring: Defining dependency "ring" 00:01:46.981 Message: lib/rcu: Defining dependency "rcu" 00:01:46.981 Message: lib/mempool: Defining dependency "mempool" 00:01:46.981 Message: lib/mbuf: Defining dependency "mbuf" 00:01:46.981 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:46.981 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:46.981 Compiler for C supports arguments -mpclmul: YES 00:01:46.981 Compiler for C supports arguments -maes: YES 00:01:46.981 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:46.981 Compiler for C supports arguments -mavx512bw: YES 00:01:46.981 Compiler for C supports arguments -mavx512dq: YES 00:01:46.981 Compiler for C supports arguments -mavx512vl: YES 00:01:46.981 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:46.981 Compiler for C supports arguments -mavx2: YES 00:01:46.981 Compiler for C supports arguments -mavx: YES 00:01:46.981 Message: lib/net: Defining dependency "net" 00:01:46.981 Message: lib/meter: Defining dependency "meter" 00:01:46.981 Message: lib/ethdev: Defining dependency "ethdev" 00:01:46.981 Message: lib/pci: Defining dependency "pci" 00:01:46.981 Message: lib/cmdline: Defining dependency "cmdline" 00:01:46.981 Message: lib/hash: Defining dependency "hash" 00:01:46.981 Message: lib/timer: Defining dependency "timer" 00:01:46.981 Message: lib/compressdev: Defining dependency "compressdev" 00:01:46.981 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:46.981 Message: lib/dmadev: Defining dependency "dmadev" 00:01:46.981 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:46.981 Message: lib/power: Defining dependency "power" 00:01:46.981 Message: lib/reorder: Defining dependency "reorder" 00:01:46.981 
Message: lib/security: Defining dependency "security" 00:01:46.981 Has header "linux/userfaultfd.h" : YES 00:01:46.981 Has header "linux/vduse.h" : YES 00:01:46.981 Message: lib/vhost: Defining dependency "vhost" 00:01:46.981 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:46.981 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:46.981 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:46.981 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:46.981 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:46.981 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:46.981 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:46.981 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:46.981 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:46.981 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:46.981 Program doxygen found: YES (/usr/bin/doxygen) 00:01:46.981 Configuring doxy-api-html.conf using configuration 00:01:46.981 Configuring doxy-api-man.conf using configuration 00:01:46.981 Program mandb found: YES (/usr/bin/mandb) 00:01:46.981 Program sphinx-build found: NO 00:01:46.981 Configuring rte_build_config.h using configuration 00:01:46.981 Message: 00:01:46.981 ================= 00:01:46.981 Applications Enabled 00:01:46.981 ================= 00:01:46.981 00:01:46.981 apps: 00:01:46.981 00:01:46.981 00:01:46.981 Message: 00:01:46.981 ================= 00:01:46.981 Libraries Enabled 00:01:46.981 ================= 00:01:46.981 00:01:46.981 libs: 00:01:46.981 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:46.981 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:46.981 cryptodev, dmadev, power, reorder, security, vhost, 00:01:46.981 00:01:46.981 Message: 00:01:46.981 =============== 00:01:46.981 Drivers Enabled 00:01:46.981 =============== 00:01:46.981 00:01:46.981 common: 00:01:46.981 00:01:46.981 bus: 00:01:46.981 pci, vdev, 00:01:46.981 mempool: 00:01:46.981 ring, 00:01:46.981 dma: 00:01:46.981 00:01:46.981 net: 00:01:46.981 00:01:46.981 crypto: 00:01:46.981 00:01:46.981 compress: 00:01:46.981 00:01:46.981 vdpa: 00:01:46.981 00:01:46.981 00:01:46.981 Message: 00:01:46.981 ================= 00:01:46.981 Content Skipped 00:01:46.981 ================= 00:01:46.981 00:01:46.981 apps: 00:01:46.981 dumpcap: explicitly disabled via build config 00:01:46.981 graph: explicitly disabled via build config 00:01:46.981 pdump: explicitly disabled via build config 00:01:46.981 proc-info: explicitly disabled via build config 00:01:46.981 test-acl: explicitly disabled via build config 00:01:46.981 test-bbdev: explicitly disabled via build config 00:01:46.981 test-cmdline: explicitly disabled via build config 00:01:46.981 test-compress-perf: explicitly disabled via build config 00:01:46.981 test-crypto-perf: explicitly disabled via build config 00:01:46.981 test-dma-perf: explicitly disabled via build config 00:01:46.981 test-eventdev: explicitly disabled via build config 00:01:46.981 test-fib: explicitly disabled via build config 00:01:46.981 test-flow-perf: explicitly disabled via build config 00:01:46.981 test-gpudev: explicitly disabled via build config 00:01:46.981 test-mldev: explicitly disabled via build config 00:01:46.981 test-pipeline: explicitly disabled via build config 00:01:46.981 test-pmd: explicitly disabled via build config 
00:01:46.981 test-regex: explicitly disabled via build config 00:01:46.981 test-sad: explicitly disabled via build config 00:01:46.981 test-security-perf: explicitly disabled via build config 00:01:46.981 00:01:46.981 libs: 00:01:46.981 argparse: explicitly disabled via build config 00:01:46.981 metrics: explicitly disabled via build config 00:01:46.981 acl: explicitly disabled via build config 00:01:46.981 bbdev: explicitly disabled via build config 00:01:46.981 bitratestats: explicitly disabled via build config 00:01:46.981 bpf: explicitly disabled via build config 00:01:46.981 cfgfile: explicitly disabled via build config 00:01:46.981 distributor: explicitly disabled via build config 00:01:46.981 efd: explicitly disabled via build config 00:01:46.981 eventdev: explicitly disabled via build config 00:01:46.981 dispatcher: explicitly disabled via build config 00:01:46.981 gpudev: explicitly disabled via build config 00:01:46.981 gro: explicitly disabled via build config 00:01:46.981 gso: explicitly disabled via build config 00:01:46.981 ip_frag: explicitly disabled via build config 00:01:46.981 jobstats: explicitly disabled via build config 00:01:46.981 latencystats: explicitly disabled via build config 00:01:46.981 lpm: explicitly disabled via build config 00:01:46.981 member: explicitly disabled via build config 00:01:46.981 pcapng: explicitly disabled via build config 00:01:46.981 rawdev: explicitly disabled via build config 00:01:46.981 regexdev: explicitly disabled via build config 00:01:46.981 mldev: explicitly disabled via build config 00:01:46.981 rib: explicitly disabled via build config 00:01:46.981 sched: explicitly disabled via build config 00:01:46.981 stack: explicitly disabled via build config 00:01:46.981 ipsec: explicitly disabled via build config 00:01:46.981 pdcp: explicitly disabled via build config 00:01:46.981 fib: explicitly disabled via build config 00:01:46.981 port: explicitly disabled via build config 00:01:46.981 pdump: explicitly disabled via build config 00:01:46.981 table: explicitly disabled via build config 00:01:46.981 pipeline: explicitly disabled via build config 00:01:46.981 graph: explicitly disabled via build config 00:01:46.981 node: explicitly disabled via build config 00:01:46.981 00:01:46.981 drivers: 00:01:46.981 common/cpt: not in enabled drivers build config 00:01:46.981 common/dpaax: not in enabled drivers build config 00:01:46.981 common/iavf: not in enabled drivers build config 00:01:46.981 common/idpf: not in enabled drivers build config 00:01:46.981 common/ionic: not in enabled drivers build config 00:01:46.981 common/mvep: not in enabled drivers build config 00:01:46.981 common/octeontx: not in enabled drivers build config 00:01:46.981 bus/auxiliary: not in enabled drivers build config 00:01:46.981 bus/cdx: not in enabled drivers build config 00:01:46.981 bus/dpaa: not in enabled drivers build config 00:01:46.981 bus/fslmc: not in enabled drivers build config 00:01:46.981 bus/ifpga: not in enabled drivers build config 00:01:46.981 bus/platform: not in enabled drivers build config 00:01:46.981 bus/uacce: not in enabled drivers build config 00:01:46.981 bus/vmbus: not in enabled drivers build config 00:01:46.981 common/cnxk: not in enabled drivers build config 00:01:46.981 common/mlx5: not in enabled drivers build config 00:01:46.981 common/nfp: not in enabled drivers build config 00:01:46.981 common/nitrox: not in enabled drivers build config 00:01:46.981 common/qat: not in enabled drivers build config 00:01:46.981 common/sfc_efx: not in 
enabled drivers build config 00:01:46.981 mempool/bucket: not in enabled drivers build config 00:01:46.981 mempool/cnxk: not in enabled drivers build config 00:01:46.981 mempool/dpaa: not in enabled drivers build config 00:01:46.981 mempool/dpaa2: not in enabled drivers build config 00:01:46.981 mempool/octeontx: not in enabled drivers build config 00:01:46.981 mempool/stack: not in enabled drivers build config 00:01:46.981 dma/cnxk: not in enabled drivers build config 00:01:46.981 dma/dpaa: not in enabled drivers build config 00:01:46.981 dma/dpaa2: not in enabled drivers build config 00:01:46.981 dma/hisilicon: not in enabled drivers build config 00:01:46.981 dma/idxd: not in enabled drivers build config 00:01:46.981 dma/ioat: not in enabled drivers build config 00:01:46.981 dma/skeleton: not in enabled drivers build config 00:01:46.981 net/af_packet: not in enabled drivers build config 00:01:46.981 net/af_xdp: not in enabled drivers build config 00:01:46.981 net/ark: not in enabled drivers build config 00:01:46.981 net/atlantic: not in enabled drivers build config 00:01:46.981 net/avp: not in enabled drivers build config 00:01:46.981 net/axgbe: not in enabled drivers build config 00:01:46.981 net/bnx2x: not in enabled drivers build config 00:01:46.981 net/bnxt: not in enabled drivers build config 00:01:46.981 net/bonding: not in enabled drivers build config 00:01:46.981 net/cnxk: not in enabled drivers build config 00:01:46.981 net/cpfl: not in enabled drivers build config 00:01:46.981 net/cxgbe: not in enabled drivers build config 00:01:46.981 net/dpaa: not in enabled drivers build config 00:01:46.981 net/dpaa2: not in enabled drivers build config 00:01:46.981 net/e1000: not in enabled drivers build config 00:01:46.981 net/ena: not in enabled drivers build config 00:01:46.981 net/enetc: not in enabled drivers build config 00:01:46.981 net/enetfec: not in enabled drivers build config 00:01:46.981 net/enic: not in enabled drivers build config 00:01:46.981 net/failsafe: not in enabled drivers build config 00:01:46.981 net/fm10k: not in enabled drivers build config 00:01:46.981 net/gve: not in enabled drivers build config 00:01:46.981 net/hinic: not in enabled drivers build config 00:01:46.981 net/hns3: not in enabled drivers build config 00:01:46.981 net/i40e: not in enabled drivers build config 00:01:46.981 net/iavf: not in enabled drivers build config 00:01:46.981 net/ice: not in enabled drivers build config 00:01:46.981 net/idpf: not in enabled drivers build config 00:01:46.981 net/igc: not in enabled drivers build config 00:01:46.981 net/ionic: not in enabled drivers build config 00:01:46.981 net/ipn3ke: not in enabled drivers build config 00:01:46.981 net/ixgbe: not in enabled drivers build config 00:01:46.981 net/mana: not in enabled drivers build config 00:01:46.981 net/memif: not in enabled drivers build config 00:01:46.981 net/mlx4: not in enabled drivers build config 00:01:46.981 net/mlx5: not in enabled drivers build config 00:01:46.981 net/mvneta: not in enabled drivers build config 00:01:46.981 net/mvpp2: not in enabled drivers build config 00:01:46.981 net/netvsc: not in enabled drivers build config 00:01:46.981 net/nfb: not in enabled drivers build config 00:01:46.981 net/nfp: not in enabled drivers build config 00:01:46.981 net/ngbe: not in enabled drivers build config 00:01:46.981 net/null: not in enabled drivers build config 00:01:46.981 net/octeontx: not in enabled drivers build config 00:01:46.981 net/octeon_ep: not in enabled drivers build config 00:01:46.981 
net/pcap: not in enabled drivers build config 00:01:46.981 net/pfe: not in enabled drivers build config 00:01:46.981 net/qede: not in enabled drivers build config 00:01:46.981 net/ring: not in enabled drivers build config 00:01:46.981 net/sfc: not in enabled drivers build config 00:01:46.981 net/softnic: not in enabled drivers build config 00:01:46.981 net/tap: not in enabled drivers build config 00:01:46.981 net/thunderx: not in enabled drivers build config 00:01:46.981 net/txgbe: not in enabled drivers build config 00:01:46.981 net/vdev_netvsc: not in enabled drivers build config 00:01:46.981 net/vhost: not in enabled drivers build config 00:01:46.981 net/virtio: not in enabled drivers build config 00:01:46.981 net/vmxnet3: not in enabled drivers build config 00:01:46.981 raw/*: missing internal dependency, "rawdev" 00:01:46.981 crypto/armv8: not in enabled drivers build config 00:01:46.981 crypto/bcmfs: not in enabled drivers build config 00:01:46.981 crypto/caam_jr: not in enabled drivers build config 00:01:46.981 crypto/ccp: not in enabled drivers build config 00:01:46.981 crypto/cnxk: not in enabled drivers build config 00:01:46.981 crypto/dpaa_sec: not in enabled drivers build config 00:01:46.981 crypto/dpaa2_sec: not in enabled drivers build config 00:01:46.981 crypto/ipsec_mb: not in enabled drivers build config 00:01:46.981 crypto/mlx5: not in enabled drivers build config 00:01:46.981 crypto/mvsam: not in enabled drivers build config 00:01:46.981 crypto/nitrox: not in enabled drivers build config 00:01:46.981 crypto/null: not in enabled drivers build config 00:01:46.981 crypto/octeontx: not in enabled drivers build config 00:01:46.981 crypto/openssl: not in enabled drivers build config 00:01:46.981 crypto/scheduler: not in enabled drivers build config 00:01:46.981 crypto/uadk: not in enabled drivers build config 00:01:46.981 crypto/virtio: not in enabled drivers build config 00:01:46.981 compress/isal: not in enabled drivers build config 00:01:46.981 compress/mlx5: not in enabled drivers build config 00:01:46.981 compress/nitrox: not in enabled drivers build config 00:01:46.981 compress/octeontx: not in enabled drivers build config 00:01:46.981 compress/zlib: not in enabled drivers build config 00:01:46.981 regex/*: missing internal dependency, "regexdev" 00:01:46.981 ml/*: missing internal dependency, "mldev" 00:01:46.981 vdpa/ifc: not in enabled drivers build config 00:01:46.981 vdpa/mlx5: not in enabled drivers build config 00:01:46.981 vdpa/nfp: not in enabled drivers build config 00:01:46.981 vdpa/sfc: not in enabled drivers build config 00:01:46.981 event/*: missing internal dependency, "eventdev" 00:01:46.981 baseband/*: missing internal dependency, "bbdev" 00:01:46.981 gpu/*: missing internal dependency, "gpudev" 00:01:46.981 00:01:46.981 00:01:46.981 Build targets in project: 85 00:01:46.981 00:01:46.981 DPDK 24.03.0 00:01:46.981 00:01:46.981 User defined options 00:01:46.981 buildtype : debug 00:01:46.981 default_library : shared 00:01:46.981 libdir : lib 00:01:46.981 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:46.981 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:46.981 c_link_args : 00:01:46.981 cpu_instruction_set: native 00:01:46.981 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:46.981 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:46.981 enable_docs : false 00:01:46.981 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:46.981 enable_kmods : false 00:01:46.981 max_lcores : 128 00:01:46.981 tests : false 00:01:46.981 00:01:46.981 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:46.981 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:46.981 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:46.981 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:46.982 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:46.982 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:46.982 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:46.982 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:46.982 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:46.982 [8/268] Linking static target lib/librte_kvargs.a 00:01:46.982 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:46.982 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:46.982 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:46.982 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:46.982 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:46.982 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:46.982 [15/268] Linking static target lib/librte_log.a 00:01:47.240 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:47.808 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.808 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:47.808 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:47.808 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:47.808 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:47.808 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:47.808 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:47.808 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:47.808 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:47.808 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:47.808 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:47.808 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:47.808 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:47.808 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:47.808 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:47.808 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:47.808 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:47.808 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:47.808 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:47.808 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:47.808 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:47.808 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:48.068 [39/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:48.068 [40/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:48.068 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:48.068 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:48.068 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:48.068 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:48.068 [45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:48.068 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:48.068 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:48.069 [48/268] Linking static target lib/librte_telemetry.a 00:01:48.069 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:48.069 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:48.069 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:48.069 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:48.069 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:48.069 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:48.069 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:48.069 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:48.069 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:48.069 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:48.069 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:48.069 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:48.331 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:48.331 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:48.331 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:48.331 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.331 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:48.331 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.331 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:48.331 [68/268] Linking target lib/librte_log.so.24.1 00:01:48.331 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:48.331 [70/268] Linking static target lib/librte_pci.a 
00:01:48.602 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:48.602 [72/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:48.602 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:48.602 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:48.602 [75/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:48.602 [76/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:48.602 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:48.860 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:48.860 [79/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:48.860 [80/268] Linking target lib/librte_kvargs.so.24.1
00:01:48.860 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:48.860 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:48.860 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:48.860 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:48.860 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:48.861 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:48.861 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:48.861 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:48.861 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:48.861 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:48.861 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:48.861 [92/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.861 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:48.861 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:48.861 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:48.861 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:48.861 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:48.861 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:48.861 [99/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.861 [100/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:48.861 [101/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:49.120 [102/268] Linking static target lib/librte_meter.a
00:01:49.120 [103/268] Linking static target lib/librte_ring.a
00:01:49.120 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:49.120 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:49.120 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:49.120 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:49.120 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:49.120 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:49.120 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:49.120 [111/268] Linking target lib/librte_telemetry.so.24.1
00:01:49.120 [112/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:49.120 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:49.121 [114/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:49.121 [115/268] Linking static target lib/librte_mempool.a
00:01:49.121 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:49.121 [117/268] Linking static target lib/librte_rcu.a
00:01:49.121 [118/268] Linking static target lib/librte_eal.a
00:01:49.121 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:49.121 [120/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:49.121 [121/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:49.121 [122/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:49.121 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:49.121 [124/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:49.382 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:49.382 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:49.382 [127/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:49.382 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:49.382 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:49.382 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:49.382 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:49.382 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:49.382 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:49.382 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:49.643 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:49.643 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.643 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:49.643 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:49.643 [139/268] Linking static target lib/librte_net.a
00:01:49.643 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:49.643 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:49.643 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.643 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:49.643 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:49.644 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:49.902 [146/268] Linking static target lib/librte_cmdline.a
00:01:49.903 [147/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.903 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:49.903 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:49.903 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:49.903 [151/268] Linking static target lib/librte_timer.a
00:01:49.903 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:49.903 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:49.903 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:49.903 [155/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:50.161 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:50.161 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.161 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:50.161 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:50.161 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:50.161 [161/268] Linking static target lib/librte_dmadev.a
00:01:50.161 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:50.161 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:50.161 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:50.161 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:50.419 [166/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.419 [167/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:50.419 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:50.419 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:50.419 [170/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.419 [171/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:50.419 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:50.419 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:50.419 [174/268] Linking static target lib/librte_power.a
00:01:50.419 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:50.419 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:50.419 [177/268] Linking static target lib/librte_hash.a
00:01:50.419 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:50.419 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:50.419 [180/268] Linking static target lib/librte_compressdev.a
00:01:50.419 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:50.419 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:50.419 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:50.419 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:50.676 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:50.676 [186/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.676 [187/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:50.676 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:50.676 [189/268] Linking static target lib/librte_mbuf.a
00:01:50.676 [190/268] Linking static target lib/librte_reorder.a
00:01:50.676 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.676 [192/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:50.676 [193/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:50.676 [194/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:50.676 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:50.676 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:50.676 [197/268] Linking static target lib/librte_security.a
00:01:50.933 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:50.933 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:50.933 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.933 [201/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:50.933 [202/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:50.933 [203/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:50.933 [204/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:50.933 [205/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:50.933 [206/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:50.933 [207/268] Linking static target drivers/librte_bus_vdev.a
00:01:50.933 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.933 [209/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.933 [210/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.933 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:50.933 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:50.933 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:50.933 [214/268] Linking static target drivers/librte_bus_pci.a
00:01:51.191 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.191 [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.191 [217/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:51.191 [218/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:51.191 [219/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:51.191 [220/268] Linking static target drivers/librte_mempool_ring.a
00:01:51.191 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.191 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:51.191 [223/268] Linking static target lib/librte_cryptodev.a
00:01:51.191 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:51.191 [225/268] Linking static target lib/librte_ethdev.a
00:01:51.449 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:52.382 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:53.316 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:55.842 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.842 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.842 [231/268] Linking target lib/librte_eal.so.24.1
00:01:55.842 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:01:55.842 [233/268] Linking target lib/librte_timer.so.24.1
00:01:55.842 [234/268] Linking target drivers/librte_bus_vdev.so.24.1
00:01:55.842 [235/268] Linking target lib/librte_ring.so.24.1
00:01:55.842 [236/268] Linking target lib/librte_pci.so.24.1
00:01:55.842 [237/268] Linking target lib/librte_meter.so.24.1
00:01:55.842 [238/268] Linking target lib/librte_dmadev.so.24.1
00:01:55.842 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:01:55.842 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:01:55.842 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:01:55.842 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:01:55.842 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:01:55.842 [244/268] Linking target lib/librte_rcu.so.24.1
00:01:55.842 [245/268] Linking target lib/librte_mempool.so.24.1
00:01:55.842 [246/268] Linking target drivers/librte_bus_pci.so.24.1
00:01:55.842 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:01:55.842 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:01:55.842 [249/268] Linking target drivers/librte_mempool_ring.so.24.1
00:01:55.842 [250/268] Linking target lib/librte_mbuf.so.24.1
00:01:56.101 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:01:56.101 [252/268] Linking target lib/librte_reorder.so.24.1
00:01:56.101 [253/268] Linking target lib/librte_compressdev.so.24.1
00:01:56.101 [254/268] Linking target lib/librte_net.so.24.1
00:01:56.101 [255/268] Linking target lib/librte_cryptodev.so.24.1
00:01:56.101 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:01:56.101 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:01:56.359 [258/268] Linking target lib/librte_hash.so.24.1
00:01:56.359 [259/268] Linking target lib/librte_security.so.24.1
00:01:56.359 [260/268] Linking target lib/librte_cmdline.so.24.1
00:01:56.359 [261/268] Linking target lib/librte_ethdev.so.24.1
00:01:56.359 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:01:56.359 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:01:56.359 [264/268] Linking target lib/librte_power.so.24.1
00:01:59.636 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:59.636 [266/268] Linking static target lib/librte_vhost.a
00:02:00.569 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.569 [268/268] Linking target lib/librte_vhost.so.24.1
00:02:00.569 INFO: autodetecting backend as ninja
00:02:00.569 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48
00:02:01.503 CC lib/ut/ut.o
00:02:01.503 CC lib/ut_mock/mock.o
00:02:01.503 CC lib/log/log.o
00:02:01.503 CC lib/log/log_flags.o
00:02:01.504 CC lib/log/log_deprecated.o
00:02:01.504 LIB libspdk_log.a
00:02:01.504 LIB libspdk_ut.a
00:02:01.504 LIB libspdk_ut_mock.a
00:02:01.504 SO libspdk_ut.so.2.0
00:02:01.504 SO libspdk_log.so.7.0
00:02:01.504 SO libspdk_ut_mock.so.6.0
00:02:01.789 SYMLINK libspdk_ut.so
00:02:01.789 SYMLINK libspdk_ut_mock.so
00:02:01.789 SYMLINK libspdk_log.so
00:02:01.789 CC lib/ioat/ioat.o
00:02:01.789 CXX lib/trace_parser/trace.o
00:02:01.789 CC lib/dma/dma.o
00:02:01.789 CC lib/util/base64.o
00:02:01.789 CC lib/util/bit_array.o
00:02:01.789 CC lib/util/cpuset.o
00:02:01.789 CC lib/util/crc16.o
00:02:01.789 CC lib/util/crc32.o
00:02:01.789 CC lib/util/crc32c.o
00:02:01.789 CC lib/util/crc32_ieee.o
00:02:01.789 CC lib/util/crc64.o
00:02:01.789 CC lib/util/dif.o
00:02:01.789 CC lib/util/fd.o
00:02:01.789 CC lib/util/fd_group.o
00:02:01.789 CC lib/util/file.o
00:02:01.789 CC lib/util/hexlify.o
00:02:01.789 CC lib/util/iov.o
00:02:01.789 CC lib/util/math.o
00:02:01.789 CC lib/util/net.o
00:02:01.789 CC lib/util/pipe.o
00:02:01.789 CC lib/util/string.o
00:02:01.789 CC lib/util/strerror_tls.o
00:02:01.789 CC lib/util/uuid.o
00:02:01.789 CC lib/util/zipf.o
00:02:01.789 CC lib/util/xor.o
00:02:02.048 CC lib/vfio_user/host/vfio_user_pci.o
00:02:02.048 CC lib/vfio_user/host/vfio_user.o
00:02:02.048 LIB libspdk_dma.a
00:02:02.048 SO libspdk_dma.so.4.0
00:02:02.048 SYMLINK libspdk_dma.so
00:02:02.305 LIB libspdk_vfio_user.a
00:02:02.305 LIB libspdk_ioat.a
00:02:02.305 SO libspdk_vfio_user.so.5.0
00:02:02.305 SO libspdk_ioat.so.7.0
00:02:02.305 SYMLINK libspdk_vfio_user.so
00:02:02.305 SYMLINK libspdk_ioat.so
00:02:02.305 LIB libspdk_util.a
00:02:02.563 SO libspdk_util.so.10.0
00:02:02.563 SYMLINK libspdk_util.so
00:02:02.821 LIB libspdk_trace_parser.a
00:02:02.821 CC lib/conf/conf.o
00:02:02.821 CC lib/json/json_parse.o
00:02:02.821 CC lib/idxd/idxd.o
00:02:02.821 CC lib/rdma_utils/rdma_utils.o
00:02:02.821 CC lib/rdma_provider/common.o
00:02:02.821 CC lib/env_dpdk/env.o
00:02:02.821 CC lib/vmd/vmd.o
00:02:02.821 CC lib/json/json_util.o
00:02:02.821 CC lib/vmd/led.o
00:02:02.821 CC lib/env_dpdk/memory.o
00:02:02.821 CC lib/rdma_provider/rdma_provider_verbs.o
00:02:02.821 CC lib/json/json_write.o
00:02:02.821 CC lib/idxd/idxd_user.o
00:02:02.821 CC lib/env_dpdk/pci.o
00:02:02.821 CC lib/idxd/idxd_kernel.o
00:02:02.821 CC lib/env_dpdk/init.o
00:02:02.821 CC lib/env_dpdk/threads.o
00:02:02.821 CC lib/env_dpdk/pci_ioat.o
00:02:02.821 CC lib/env_dpdk/pci_virtio.o
00:02:02.821 CC lib/env_dpdk/pci_vmd.o
00:02:02.821 CC lib/env_dpdk/pci_idxd.o
00:02:02.821 CC lib/env_dpdk/pci_event.o
00:02:02.821 CC lib/env_dpdk/sigbus_handler.o
00:02:02.821 CC lib/env_dpdk/pci_dpdk.o
00:02:02.821 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:02.821 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:02.821 SO libspdk_trace_parser.so.5.0
00:02:02.821 SYMLINK libspdk_trace_parser.so
00:02:03.078 LIB libspdk_rdma_provider.a
00:02:03.078 SO libspdk_rdma_provider.so.6.0
00:02:03.078 LIB libspdk_conf.a
00:02:03.078 SO libspdk_conf.so.6.0
00:02:03.078 LIB libspdk_rdma_utils.a
00:02:03.078 SYMLINK libspdk_rdma_provider.so
00:02:03.078 SO libspdk_rdma_utils.so.1.0
00:02:03.078 SYMLINK libspdk_conf.so
00:02:03.078 SYMLINK libspdk_rdma_utils.so
00:02:03.078 LIB libspdk_json.a
00:02:03.336 SO libspdk_json.so.6.0
00:02:03.336 SYMLINK libspdk_json.so
00:02:03.336 LIB libspdk_idxd.a
00:02:03.336 SO libspdk_idxd.so.12.0
00:02:03.336 CC lib/jsonrpc/jsonrpc_server.o
00:02:03.336 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:03.336 CC lib/jsonrpc/jsonrpc_client.o
00:02:03.336 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:03.336 SYMLINK libspdk_idxd.so
00:02:03.593 LIB libspdk_vmd.a
00:02:03.593 SO libspdk_vmd.so.6.0
00:02:03.593 SYMLINK libspdk_vmd.so
00:02:03.593 LIB libspdk_jsonrpc.a
00:02:03.850 SO libspdk_jsonrpc.so.6.0
00:02:03.850 SYMLINK libspdk_jsonrpc.so
00:02:03.850 CC lib/rpc/rpc.o
00:02:04.108 LIB libspdk_rpc.a
00:02:04.108 SO libspdk_rpc.so.6.0
00:02:04.365 SYMLINK libspdk_rpc.so
00:02:04.365 CC lib/notify/notify.o
00:02:04.365 CC lib/notify/notify_rpc.o
00:02:04.365 CC lib/trace/trace.o
00:02:04.365 CC lib/trace/trace_flags.o
00:02:04.365 CC lib/keyring/keyring.o
00:02:04.365 CC lib/trace/trace_rpc.o
00:02:04.365 CC lib/keyring/keyring_rpc.o
00:02:04.622 LIB libspdk_notify.a
00:02:04.622 SO libspdk_notify.so.6.0
00:02:04.622 SYMLINK libspdk_notify.so
00:02:04.622 LIB libspdk_keyring.a
00:02:04.622 LIB libspdk_trace.a
00:02:04.622 SO libspdk_keyring.so.1.0
00:02:04.622 SO libspdk_trace.so.10.0
00:02:04.622 SYMLINK libspdk_keyring.so
00:02:04.879 SYMLINK libspdk_trace.so
00:02:04.879 LIB libspdk_env_dpdk.a
00:02:04.879 SO libspdk_env_dpdk.so.15.0
00:02:04.879 CC lib/sock/sock.o
00:02:04.879 CC lib/sock/sock_rpc.o
00:02:04.879 CC lib/thread/thread.o
00:02:04.879 CC lib/thread/iobuf.o
00:02:05.137 SYMLINK libspdk_env_dpdk.so
00:02:05.395 LIB libspdk_sock.a
00:02:05.395 SO libspdk_sock.so.10.0
00:02:05.395 SYMLINK libspdk_sock.so
00:02:05.653 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:05.653 CC lib/nvme/nvme_ctrlr.o
00:02:05.653 CC lib/nvme/nvme_fabric.o
00:02:05.653 CC lib/nvme/nvme_ns_cmd.o
00:02:05.653 CC lib/nvme/nvme_ns.o
00:02:05.653 CC lib/nvme/nvme_pcie_common.o
00:02:05.653 CC lib/nvme/nvme_pcie.o
00:02:05.653 CC lib/nvme/nvme_qpair.o
00:02:05.653 CC lib/nvme/nvme.o
00:02:05.653 CC lib/nvme/nvme_quirks.o
00:02:05.653 CC lib/nvme/nvme_transport.o
00:02:05.653 CC lib/nvme/nvme_discovery.o
00:02:05.653 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:05.653 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:05.653 CC lib/nvme/nvme_tcp.o
00:02:05.653 CC lib/nvme/nvme_opal.o
00:02:05.653 CC lib/nvme/nvme_io_msg.o
00:02:05.653 CC lib/nvme/nvme_poll_group.o
00:02:05.653 CC lib/nvme/nvme_zns.o
00:02:05.653 CC lib/nvme/nvme_stubs.o
00:02:05.653 CC lib/nvme/nvme_auth.o
00:02:05.653 CC lib/nvme/nvme_cuse.o
00:02:05.653 CC lib/nvme/nvme_vfio_user.o
00:02:05.653 CC lib/nvme/nvme_rdma.o
00:02:06.584 LIB libspdk_thread.a
00:02:06.584 SO libspdk_thread.so.10.1
00:02:06.584 SYMLINK libspdk_thread.so
00:02:06.841 CC lib/vfu_tgt/tgt_endpoint.o
00:02:06.841 CC lib/virtio/virtio.o
00:02:06.841 CC lib/accel/accel.o
00:02:06.841 CC lib/blob/blobstore.o
00:02:06.841 CC lib/init/json_config.o
00:02:06.841 CC lib/blob/request.o
00:02:06.841 CC lib/accel/accel_rpc.o
00:02:06.841 CC lib/virtio/virtio_vhost_user.o
00:02:06.841 CC lib/blob/zeroes.o
00:02:06.841 CC lib/init/subsystem.o
00:02:06.841 CC lib/vfu_tgt/tgt_rpc.o
00:02:06.841 CC lib/virtio/virtio_vfio_user.o
00:02:06.841 CC lib/accel/accel_sw.o
00:02:06.841 CC lib/blob/blob_bs_dev.o
00:02:06.841 CC lib/init/subsystem_rpc.o
00:02:06.841 CC lib/virtio/virtio_pci.o
00:02:06.841 CC lib/init/rpc.o
00:02:07.098 LIB libspdk_init.a
00:02:07.098 SO libspdk_init.so.5.0
00:02:07.098 LIB libspdk_virtio.a
00:02:07.098 LIB libspdk_vfu_tgt.a
00:02:07.098 SYMLINK libspdk_init.so
00:02:07.098 SO libspdk_vfu_tgt.so.3.0
00:02:07.098 SO libspdk_virtio.so.7.0
00:02:07.098 SYMLINK libspdk_vfu_tgt.so
00:02:07.098 SYMLINK libspdk_virtio.so
00:02:07.355 CC lib/event/app.o
00:02:07.355 CC lib/event/reactor.o
00:02:07.355 CC lib/event/log_rpc.o
00:02:07.355 CC lib/event/app_rpc.o
00:02:07.355 CC lib/event/scheduler_static.o
00:02:07.611 LIB libspdk_event.a
00:02:07.612 SO libspdk_event.so.14.0
00:02:07.869 SYMLINK libspdk_event.so
00:02:07.869 LIB libspdk_accel.a
00:02:07.869 SO libspdk_accel.so.16.0
00:02:07.869 SYMLINK libspdk_accel.so
00:02:07.869 LIB libspdk_nvme.a
00:02:08.126 SO libspdk_nvme.so.13.1
00:02:08.126 CC lib/bdev/bdev.o
00:02:08.126 CC lib/bdev/bdev_rpc.o
00:02:08.126 CC lib/bdev/bdev_zone.o
00:02:08.126 CC lib/bdev/part.o
00:02:08.126 CC lib/bdev/scsi_nvme.o
00:02:08.383 SYMLINK libspdk_nvme.so
00:02:09.755 LIB libspdk_blob.a
00:02:09.755 SO libspdk_blob.so.11.0
00:02:09.755 SYMLINK libspdk_blob.so
00:02:10.012 CC lib/blobfs/blobfs.o
00:02:10.012 CC lib/blobfs/tree.o
00:02:10.012 CC lib/lvol/lvol.o
00:02:10.577 LIB libspdk_bdev.a
00:02:10.577 LIB libspdk_blobfs.a
00:02:10.577 SO libspdk_bdev.so.16.0
00:02:10.577 SO libspdk_blobfs.so.10.0
00:02:10.836 SYMLINK libspdk_blobfs.so
00:02:10.836 LIB libspdk_lvol.a
00:02:10.836 SYMLINK libspdk_bdev.so
00:02:10.836 SO libspdk_lvol.so.10.0
00:02:10.836 SYMLINK libspdk_lvol.so
00:02:10.836 CC lib/scsi/dev.o
00:02:10.836 CC lib/ublk/ublk.o
00:02:10.836 CC lib/nbd/nbd.o
00:02:10.836 CC lib/nvmf/ctrlr.o
00:02:10.836 CC lib/scsi/lun.o
00:02:10.836 CC lib/nbd/nbd_rpc.o
00:02:10.836 CC lib/ublk/ublk_rpc.o
00:02:10.836 CC lib/scsi/port.o
00:02:10.836 CC lib/nvmf/ctrlr_discovery.o
00:02:10.836 CC lib/ftl/ftl_core.o
00:02:10.836 CC lib/nvmf/ctrlr_bdev.o
00:02:10.836 CC lib/scsi/scsi.o
00:02:10.836 CC lib/nvmf/subsystem.o
00:02:10.836 CC lib/ftl/ftl_init.o
00:02:10.836 CC lib/ftl/ftl_layout.o
00:02:10.836 CC lib/nvmf/nvmf.o
00:02:10.836 CC lib/scsi/scsi_bdev.o
00:02:10.836 CC lib/ftl/ftl_debug.o
00:02:10.836 CC lib/nvmf/nvmf_rpc.o
00:02:10.836 CC lib/ftl/ftl_io.o
00:02:10.836 CC lib/scsi/scsi_pr.o
00:02:10.836 CC lib/scsi/scsi_rpc.o
00:02:10.836 CC lib/ftl/ftl_sb.o
00:02:10.836 CC lib/nvmf/tcp.o
00:02:10.836 CC lib/nvmf/transport.o
00:02:10.836 CC lib/scsi/task.o
00:02:10.836 CC lib/ftl/ftl_l2p.o
00:02:10.836 CC lib/nvmf/mdns_server.o
00:02:10.836 CC lib/nvmf/stubs.o
00:02:10.836 CC lib/ftl/ftl_l2p_flat.o
00:02:10.836 CC lib/ftl/ftl_nv_cache.o
00:02:10.836 CC lib/nvmf/vfio_user.o
00:02:10.836 CC lib/ftl/ftl_band.o
00:02:10.836 CC lib/ftl/ftl_band_ops.o
00:02:10.836 CC lib/nvmf/rdma.o
00:02:10.836 CC lib/nvmf/auth.o
00:02:10.836 CC lib/ftl/ftl_writer.o
00:02:10.836 CC lib/ftl/ftl_rq.o
00:02:10.836 CC lib/ftl/ftl_reloc.o
00:02:10.836 CC lib/ftl/ftl_l2p_cache.o
00:02:10.836 CC lib/ftl/ftl_p2l.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:10.836 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:11.406 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:11.406 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:11.406 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:11.406 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:11.406 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:11.406 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:11.406 CC lib/ftl/utils/ftl_conf.o
00:02:11.406 CC lib/ftl/utils/ftl_md.o
00:02:11.406 CC lib/ftl/utils/ftl_mempool.o
00:02:11.406 CC lib/ftl/utils/ftl_bitmap.o
00:02:11.406 CC lib/ftl/utils/ftl_property.o
00:02:11.406 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:11.406 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:11.406 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:11.406 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:11.406 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:11.406 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:11.406 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:02:11.406 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:11.665 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:11.665 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:11.665 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:11.665 CC lib/ftl/base/ftl_base_dev.o
00:02:11.665 CC lib/ftl/base/ftl_base_bdev.o
00:02:11.665 CC lib/ftl/ftl_trace.o
00:02:11.665 LIB libspdk_nbd.a
00:02:11.923 SO libspdk_nbd.so.7.0
00:02:11.923 SYMLINK libspdk_nbd.so
00:02:11.923 LIB libspdk_scsi.a
00:02:11.923 SO libspdk_scsi.so.9.0
00:02:11.923 LIB libspdk_ublk.a
00:02:11.923 SO libspdk_ublk.so.3.0
00:02:12.182 SYMLINK libspdk_scsi.so
00:02:12.182 SYMLINK libspdk_ublk.so
00:02:12.182 CC lib/iscsi/conn.o
00:02:12.182 CC lib/iscsi/init_grp.o
00:02:12.182 CC lib/vhost/vhost.o
00:02:12.182 CC lib/vhost/vhost_rpc.o
00:02:12.182 CC lib/iscsi/iscsi.o
00:02:12.182 CC lib/iscsi/md5.o
00:02:12.182 CC lib/vhost/vhost_scsi.o
00:02:12.182 CC lib/vhost/vhost_blk.o
00:02:12.182 CC lib/iscsi/param.o
00:02:12.182 CC lib/vhost/rte_vhost_user.o
00:02:12.182 CC lib/iscsi/portal_grp.o
00:02:12.182 CC lib/iscsi/tgt_node.o
00:02:12.182 CC lib/iscsi/iscsi_subsystem.o
00:02:12.182 CC lib/iscsi/iscsi_rpc.o
00:02:12.182 CC lib/iscsi/task.o
00:02:12.440 LIB libspdk_ftl.a
00:02:12.698 SO libspdk_ftl.so.9.0
00:02:12.955 SYMLINK libspdk_ftl.so
00:02:13.520 LIB libspdk_vhost.a
00:02:13.520 LIB libspdk_nvmf.a
00:02:13.520 SO libspdk_vhost.so.8.0
00:02:13.520 SO libspdk_nvmf.so.19.0
00:02:13.520 SYMLINK libspdk_vhost.so
00:02:13.520 LIB libspdk_iscsi.a
00:02:13.777 SO libspdk_iscsi.so.8.0
00:02:13.777 SYMLINK libspdk_nvmf.so
00:02:13.777 SYMLINK libspdk_iscsi.so
00:02:14.035 CC module/env_dpdk/env_dpdk_rpc.o
00:02:14.035 CC module/vfu_device/vfu_virtio.o
00:02:14.035 CC module/vfu_device/vfu_virtio_blk.o
00:02:14.035 CC module/vfu_device/vfu_virtio_scsi.o
00:02:14.035 CC module/vfu_device/vfu_virtio_rpc.o
00:02:14.292 CC module/accel/iaa/accel_iaa.o
00:02:14.292 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:14.292 CC module/accel/iaa/accel_iaa_rpc.o
00:02:14.292 CC module/keyring/file/keyring.o
00:02:14.292 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:14.292 CC module/keyring/file/keyring_rpc.o
00:02:14.292 CC module/blob/bdev/blob_bdev.o
00:02:14.292 CC module/accel/error/accel_error.o
00:02:14.292 CC module/scheduler/gscheduler/gscheduler.o
00:02:14.292 CC module/accel/ioat/accel_ioat.o
00:02:14.292 CC module/accel/error/accel_error_rpc.o
00:02:14.292 CC module/accel/ioat/accel_ioat_rpc.o
00:02:14.292 CC module/keyring/linux/keyring.o
00:02:14.292 CC module/sock/posix/posix.o
00:02:14.292 CC module/keyring/linux/keyring_rpc.o
00:02:14.292 CC module/accel/dsa/accel_dsa.o
00:02:14.292 CC module/accel/dsa/accel_dsa_rpc.o
00:02:14.292 LIB libspdk_env_dpdk_rpc.a
00:02:14.292 SO libspdk_env_dpdk_rpc.so.6.0
00:02:14.292 SYMLINK libspdk_env_dpdk_rpc.so
00:02:14.292 LIB libspdk_keyring_linux.a
00:02:14.292 LIB libspdk_keyring_file.a
00:02:14.292 LIB libspdk_scheduler_gscheduler.a
00:02:14.292 LIB libspdk_scheduler_dpdk_governor.a
00:02:14.292 SO libspdk_keyring_linux.so.1.0
00:02:14.292 SO libspdk_keyring_file.so.1.0
00:02:14.292 SO libspdk_scheduler_gscheduler.so.4.0
00:02:14.292 SO libspdk_scheduler_dpdk_governor.so.4.0
00:02:14.292 LIB libspdk_accel_error.a
00:02:14.292 LIB libspdk_accel_ioat.a
00:02:14.292 LIB libspdk_scheduler_dynamic.a
00:02:14.549 SO libspdk_accel_error.so.2.0
00:02:14.549 LIB libspdk_accel_iaa.a
00:02:14.549 SO libspdk_accel_ioat.so.6.0
00:02:14.549 SYMLINK libspdk_keyring_linux.so
00:02:14.549 SO libspdk_scheduler_dynamic.so.4.0
00:02:14.549 SYMLINK libspdk_scheduler_gscheduler.so
00:02:14.549 SYMLINK libspdk_keyring_file.so
00:02:14.549 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:14.549 SO libspdk_accel_iaa.so.3.0
00:02:14.549 LIB libspdk_accel_dsa.a
00:02:14.549 SYMLINK libspdk_accel_error.so
00:02:14.549 SYMLINK libspdk_scheduler_dynamic.so
00:02:14.549 SYMLINK libspdk_accel_ioat.so
00:02:14.549 LIB libspdk_blob_bdev.a
00:02:14.549 SO libspdk_accel_dsa.so.5.0
00:02:14.549 SYMLINK libspdk_accel_iaa.so
00:02:14.549 SO libspdk_blob_bdev.so.11.0
00:02:14.549 SYMLINK libspdk_accel_dsa.so
00:02:14.549 SYMLINK libspdk_blob_bdev.so
00:02:14.808 LIB libspdk_vfu_device.a
00:02:14.808 SO libspdk_vfu_device.so.3.0
00:02:14.808 CC module/bdev/gpt/gpt.o
00:02:14.808 CC module/bdev/gpt/vbdev_gpt.o
00:02:14.808 CC module/blobfs/bdev/blobfs_bdev.o
00:02:14.808 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:14.808 CC module/bdev/delay/vbdev_delay.o
00:02:14.808 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:14.808 CC module/bdev/lvol/vbdev_lvol.o
00:02:14.808 CC module/bdev/passthru/vbdev_passthru.o
00:02:14.808 CC module/bdev/null/bdev_null.o
00:02:14.808 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:14.808 CC module/bdev/error/vbdev_error.o
00:02:14.808 CC module/bdev/null/bdev_null_rpc.o
00:02:14.808 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:14.808 CC module/bdev/error/vbdev_error_rpc.o
00:02:14.808 CC module/bdev/raid/bdev_raid.o
00:02:14.808 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:14.808 CC module/bdev/nvme/bdev_nvme.o
00:02:14.808 CC module/bdev/raid/bdev_raid_rpc.o
00:02:14.808 CC module/bdev/malloc/bdev_malloc.o
00:02:14.808 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:14.808 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:14.808 CC module/bdev/nvme/nvme_rpc.o
00:02:14.808 CC module/bdev/nvme/bdev_mdns_client.o
00:02:14.808 CC module/bdev/raid/bdev_raid_sb.o
00:02:14.808 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:14.808 CC module/bdev/split/vbdev_split.o
00:02:14.808 CC module/bdev/raid/raid0.o
00:02:14.808 CC module/bdev/raid/raid1.o
00:02:14.808 CC module/bdev/nvme/vbdev_opal.o
00:02:14.808 CC module/bdev/iscsi/bdev_iscsi.o
00:02:14.808 CC module/bdev/split/vbdev_split_rpc.o
00:02:14.808 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:14.808 CC module/bdev/raid/concat.o
00:02:14.808 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:14.808 CC module/bdev/aio/bdev_aio.o
00:02:14.808 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:14.808 CC module/bdev/aio/bdev_aio_rpc.o
00:02:14.808 CC module/bdev/ftl/bdev_ftl.o
00:02:14.808 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:14.808 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:14.808 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:14.808 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:15.065 SYMLINK libspdk_vfu_device.so
00:02:15.065 LIB libspdk_sock_posix.a
00:02:15.065 SO libspdk_sock_posix.so.6.0
00:02:15.065 LIB libspdk_blobfs_bdev.a
00:02:15.322 SYMLINK libspdk_sock_posix.so
00:02:15.322 SO libspdk_blobfs_bdev.so.6.0
00:02:15.322 LIB libspdk_bdev_split.a
00:02:15.322 SO libspdk_bdev_split.so.6.0
00:02:15.322 SYMLINK libspdk_blobfs_bdev.so
00:02:15.322 LIB libspdk_bdev_gpt.a
00:02:15.322 SO libspdk_bdev_gpt.so.6.0
00:02:15.322 LIB libspdk_bdev_error.a
00:02:15.322 SYMLINK libspdk_bdev_split.so
00:02:15.322 LIB libspdk_bdev_null.a
00:02:15.322 LIB libspdk_bdev_aio.a
00:02:15.322 LIB libspdk_bdev_ftl.a
00:02:15.322 SO libspdk_bdev_error.so.6.0
00:02:15.322 SO libspdk_bdev_null.so.6.0
00:02:15.322 SO libspdk_bdev_aio.so.6.0
00:02:15.322 SO libspdk_bdev_ftl.so.6.0
00:02:15.322 SYMLINK libspdk_bdev_gpt.so
00:02:15.322 LIB libspdk_bdev_passthru.a
00:02:15.322 LIB libspdk_bdev_zone_block.a
00:02:15.322 SO libspdk_bdev_passthru.so.6.0
00:02:15.322 SYMLINK libspdk_bdev_error.so
00:02:15.322 LIB libspdk_bdev_malloc.a
00:02:15.322 SYMLINK libspdk_bdev_null.so
00:02:15.322 SYMLINK libspdk_bdev_aio.so
00:02:15.322 SO libspdk_bdev_zone_block.so.6.0
00:02:15.322 SYMLINK libspdk_bdev_ftl.so
00:02:15.322 SO libspdk_bdev_malloc.so.6.0
00:02:15.322 LIB libspdk_bdev_iscsi.a
00:02:15.322 LIB libspdk_bdev_delay.a
00:02:15.579 SYMLINK libspdk_bdev_passthru.so
00:02:15.579 SO libspdk_bdev_iscsi.so.6.0
00:02:15.579 SO libspdk_bdev_delay.so.6.0
00:02:15.579 SYMLINK libspdk_bdev_zone_block.so
00:02:15.579 SYMLINK libspdk_bdev_malloc.so
00:02:15.579 SYMLINK libspdk_bdev_delay.so
00:02:15.579 SYMLINK libspdk_bdev_iscsi.so
00:02:15.580 LIB libspdk_bdev_lvol.a
00:02:15.580 LIB libspdk_bdev_virtio.a
00:02:15.580 SO libspdk_bdev_lvol.so.6.0
00:02:15.580 SO libspdk_bdev_virtio.so.6.0
00:02:15.580 SYMLINK libspdk_bdev_lvol.so
00:02:15.580 SYMLINK libspdk_bdev_virtio.so
00:02:16.144 LIB libspdk_bdev_raid.a
00:02:16.144 SO libspdk_bdev_raid.so.6.0
00:02:16.144 SYMLINK libspdk_bdev_raid.so
00:02:17.075 LIB libspdk_bdev_nvme.a
00:02:17.335 SO libspdk_bdev_nvme.so.7.0
00:02:17.335 SYMLINK libspdk_bdev_nvme.so
00:02:17.621 CC module/event/subsystems/iobuf/iobuf.o
00:02:17.621 CC module/event/subsystems/keyring/keyring.o
00:02:17.621 CC module/event/subsystems/vmd/vmd.o
00:02:17.621 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:17.621 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:02:17.621 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:17.621 CC module/event/subsystems/sock/sock.o
00:02:17.621 CC module/event/subsystems/scheduler/scheduler.o
00:02:17.621 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:17.882 LIB libspdk_event_keyring.a
00:02:17.882 LIB libspdk_event_vmd.a
00:02:17.882 LIB libspdk_event_vfu_tgt.a
00:02:17.882 LIB libspdk_event_vhost_blk.a
00:02:17.882 LIB libspdk_event_scheduler.a
00:02:17.882 LIB libspdk_event_sock.a
00:02:17.882 SO libspdk_event_keyring.so.1.0
00:02:17.882 SO libspdk_event_vfu_tgt.so.3.0
00:02:17.882 SO libspdk_event_vhost_blk.so.3.0
00:02:17.882 SO libspdk_event_vmd.so.6.0
00:02:17.882 LIB libspdk_event_iobuf.a
00:02:17.882 SO libspdk_event_scheduler.so.4.0
00:02:17.882 SO libspdk_event_sock.so.5.0
00:02:17.882 SO libspdk_event_iobuf.so.3.0
00:02:17.882 SYMLINK libspdk_event_keyring.so
00:02:17.882 SYMLINK libspdk_event_vhost_blk.so
00:02:17.882 SYMLINK libspdk_event_vfu_tgt.so
00:02:17.882 SYMLINK libspdk_event_sock.so
00:02:17.882 SYMLINK libspdk_event_scheduler.so
00:02:17.882 SYMLINK libspdk_event_vmd.so
00:02:17.882 SYMLINK libspdk_event_iobuf.so
00:02:18.140 CC module/event/subsystems/accel/accel.o
00:02:18.396 LIB libspdk_event_accel.a
00:02:18.396 SO libspdk_event_accel.so.6.0
00:02:18.396 SYMLINK libspdk_event_accel.so
00:02:18.654 CC module/event/subsystems/bdev/bdev.o
00:02:18.654 LIB libspdk_event_bdev.a
00:02:18.654 SO libspdk_event_bdev.so.6.0
00:02:18.911 SYMLINK libspdk_event_bdev.so
00:02:18.911 CC module/event/subsystems/scsi/scsi.o
00:02:18.911 CC module/event/subsystems/ublk/ublk.o
00:02:18.911 CC module/event/subsystems/nbd/nbd.o
00:02:18.911 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:18.911 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:19.169 LIB libspdk_event_ublk.a
00:02:19.169 LIB libspdk_event_nbd.a
00:02:19.169 LIB libspdk_event_scsi.a
00:02:19.169 SO libspdk_event_ublk.so.3.0
00:02:19.169 SO libspdk_event_nbd.so.6.0
00:02:19.169 SO libspdk_event_scsi.so.6.0
00:02:19.169 SYMLINK libspdk_event_nbd.so
00:02:19.169 SYMLINK libspdk_event_ublk.so
00:02:19.169 SYMLINK libspdk_event_scsi.so
00:02:19.169 LIB libspdk_event_nvmf.a
00:02:19.169 SO libspdk_event_nvmf.so.6.0
00:02:19.169 SYMLINK libspdk_event_nvmf.so
00:02:19.426 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:19.426 CC module/event/subsystems/iscsi/iscsi.o
00:02:19.426 LIB libspdk_event_vhost_scsi.a
00:02:19.426 LIB libspdk_event_iscsi.a
00:02:19.426 SO libspdk_event_vhost_scsi.so.3.0
00:02:19.426 SO libspdk_event_iscsi.so.6.0
00:02:19.683 SYMLINK libspdk_event_vhost_scsi.so
00:02:19.683 SYMLINK libspdk_event_iscsi.so
00:02:19.683 SO libspdk.so.6.0
00:02:19.683 SYMLINK libspdk.so
00:02:19.951 CXX app/trace/trace.o
00:02:19.951 CC app/trace_record/trace_record.o
00:02:19.951 CC app/spdk_top/spdk_top.o
00:02:19.951 TEST_HEADER include/spdk/accel.h
00:02:19.951 TEST_HEADER include/spdk/accel_module.h
00:02:19.951 CC app/spdk_nvme_perf/perf.o
00:02:19.951 CC app/spdk_nvme_discover/discovery_aer.o
00:02:19.951 TEST_HEADER include/spdk/assert.h
00:02:19.951 CC app/spdk_nvme_identify/identify.o
00:02:19.951 TEST_HEADER include/spdk/barrier.h
00:02:19.951 CC test/rpc_client/rpc_client_test.o
00:02:19.951 TEST_HEADER include/spdk/base64.h
00:02:19.951 TEST_HEADER include/spdk/bdev.h
00:02:19.951 TEST_HEADER include/spdk/bdev_module.h
00:02:19.951 TEST_HEADER include/spdk/bdev_zone.h
00:02:19.951 TEST_HEADER include/spdk/bit_array.h
00:02:19.951 TEST_HEADER include/spdk/bit_pool.h
00:02:19.951 CC app/spdk_lspci/spdk_lspci.o
00:02:19.951 TEST_HEADER include/spdk/blob_bdev.h
00:02:19.951 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:19.951 TEST_HEADER include/spdk/blobfs.h
00:02:19.951 TEST_HEADER include/spdk/blob.h
00:02:19.951 TEST_HEADER include/spdk/conf.h
00:02:19.951 TEST_HEADER include/spdk/config.h
00:02:19.951 TEST_HEADER include/spdk/cpuset.h
00:02:19.951 TEST_HEADER include/spdk/crc16.h
00:02:19.951 TEST_HEADER include/spdk/crc32.h
00:02:19.951 TEST_HEADER include/spdk/crc64.h
00:02:19.951 TEST_HEADER include/spdk/dif.h
00:02:19.951 TEST_HEADER include/spdk/dma.h
00:02:19.951 TEST_HEADER include/spdk/endian.h
00:02:19.951 TEST_HEADER include/spdk/env_dpdk.h
00:02:19.951 TEST_HEADER include/spdk/env.h
00:02:19.951 TEST_HEADER include/spdk/event.h
00:02:19.951 TEST_HEADER include/spdk/fd.h
00:02:19.951 TEST_HEADER include/spdk/fd_group.h
00:02:19.951 TEST_HEADER include/spdk/file.h
00:02:19.951 TEST_HEADER include/spdk/ftl.h
00:02:19.951 TEST_HEADER include/spdk/gpt_spec.h
00:02:19.951 TEST_HEADER include/spdk/hexlify.h
00:02:19.951 TEST_HEADER include/spdk/histogram_data.h
00:02:19.951 TEST_HEADER include/spdk/idxd.h
00:02:19.951 TEST_HEADER include/spdk/idxd_spec.h
00:02:19.951 TEST_HEADER include/spdk/init.h
00:02:19.951 TEST_HEADER include/spdk/ioat.h
00:02:19.951 TEST_HEADER include/spdk/ioat_spec.h
00:02:19.951 TEST_HEADER include/spdk/iscsi_spec.h
00:02:19.951 TEST_HEADER include/spdk/json.h
00:02:19.951 TEST_HEADER include/spdk/jsonrpc.h
00:02:19.951 TEST_HEADER include/spdk/keyring.h
00:02:19.951 TEST_HEADER include/spdk/keyring_module.h
00:02:19.951 TEST_HEADER include/spdk/likely.h
00:02:19.951 TEST_HEADER include/spdk/log.h
00:02:19.951 TEST_HEADER include/spdk/lvol.h
00:02:19.951 TEST_HEADER include/spdk/memory.h
00:02:19.951 TEST_HEADER include/spdk/mmio.h
00:02:19.951 TEST_HEADER include/spdk/nbd.h
00:02:19.951 TEST_HEADER include/spdk/net.h
00:02:19.951 TEST_HEADER include/spdk/notify.h
00:02:19.951 TEST_HEADER include/spdk/nvme.h
00:02:19.951 TEST_HEADER include/spdk/nvme_intel.h
00:02:19.951 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:19.951 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:19.951 TEST_HEADER include/spdk/nvme_spec.h
00:02:19.951 TEST_HEADER include/spdk/nvme_zns.h
00:02:19.951 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:19.951 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:19.951 TEST_HEADER include/spdk/nvmf.h
00:02:19.951 TEST_HEADER include/spdk/nvmf_spec.h
00:02:19.951 TEST_HEADER include/spdk/nvmf_transport.h
00:02:19.951 TEST_HEADER include/spdk/opal.h
00:02:19.951 TEST_HEADER include/spdk/opal_spec.h
00:02:19.951 TEST_HEADER include/spdk/pci_ids.h
00:02:19.951 TEST_HEADER include/spdk/queue.h
00:02:19.951 TEST_HEADER include/spdk/pipe.h
00:02:19.951 TEST_HEADER include/spdk/reduce.h
00:02:19.951 TEST_HEADER include/spdk/rpc.h
00:02:19.951 TEST_HEADER include/spdk/scsi.h
00:02:19.951 TEST_HEADER include/spdk/scheduler.h
00:02:19.951 TEST_HEADER include/spdk/scsi_spec.h
00:02:19.951 TEST_HEADER include/spdk/sock.h
00:02:19.951 TEST_HEADER include/spdk/stdinc.h
00:02:19.951 TEST_HEADER include/spdk/string.h
00:02:19.951 TEST_HEADER include/spdk/thread.h
00:02:19.951 TEST_HEADER include/spdk/trace.h
00:02:19.951 TEST_HEADER include/spdk/tree.h
00:02:19.951 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:19.951 TEST_HEADER include/spdk/trace_parser.h
00:02:19.951 TEST_HEADER include/spdk/ublk.h
00:02:19.951 TEST_HEADER include/spdk/util.h
00:02:19.951 TEST_HEADER include/spdk/uuid.h
00:02:19.951 TEST_HEADER include/spdk/version.h
00:02:19.951 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:19.951 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:19.951 TEST_HEADER include/spdk/vhost.h
00:02:19.951 TEST_HEADER include/spdk/vmd.h
00:02:19.951 TEST_HEADER include/spdk/xor.h
00:02:19.951 TEST_HEADER include/spdk/zipf.h
00:02:19.951 CXX test/cpp_headers/accel.o
00:02:19.951 CC app/spdk_dd/spdk_dd.o
00:02:19.951 CXX test/cpp_headers/accel_module.o
00:02:19.951 CXX test/cpp_headers/assert.o
00:02:19.951 CXX test/cpp_headers/barrier.o
00:02:19.951 CXX test/cpp_headers/base64.o
00:02:19.951 CXX test/cpp_headers/bdev.o
00:02:19.951 CXX test/cpp_headers/bdev_module.o
00:02:19.951 CXX test/cpp_headers/bdev_zone.o
00:02:19.951 CXX test/cpp_headers/bit_array.o
00:02:19.951 CXX test/cpp_headers/bit_pool.o
00:02:19.951 CXX test/cpp_headers/blob_bdev.o
00:02:19.951 CXX test/cpp_headers/blobfs_bdev.o
00:02:19.951 CXX test/cpp_headers/blobfs.o
00:02:19.951 CXX test/cpp_headers/blob.o
00:02:19.951 CXX test/cpp_headers/conf.o
00:02:19.951 CXX test/cpp_headers/config.o
00:02:19.951 CC app/nvmf_tgt/nvmf_main.o
00:02:19.951 CXX test/cpp_headers/cpuset.o
00:02:19.951 CXX test/cpp_headers/crc16.o
00:02:19.951 CC app/iscsi_tgt/iscsi_tgt.o
00:02:19.951 CXX test/cpp_headers/crc32.o
00:02:19.951 CC examples/util/zipf/zipf.o
00:02:19.951 CC examples/ioat/perf/perf.o
00:02:19.951 CC examples/ioat/verify/verify.o
00:02:19.951 CC app/spdk_tgt/spdk_tgt.o
00:02:19.952 CC test/env/pci/pci_ut.o
00:02:19.952 CC test/env/memory/memory_ut.o
00:02:19.952 CC test/app/histogram_perf/histogram_perf.o
00:02:19.952 CC test/app/jsoncat/jsoncat.o
00:02:19.952 CC test/env/vtophys/vtophys.o
00:02:19.952 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:19.952 CC test/thread/poller_perf/poller_perf.o
00:02:19.952 CC app/fio/nvme/fio_plugin.o
00:02:19.952 CC test/app/stub/stub.o
00:02:20.209 CC test/dma/test_dma/test_dma.o
00:02:20.209 CC app/fio/bdev/fio_plugin.o
00:02:20.209 CC test/app/bdev_svc/bdev_svc.o
00:02:20.209 CC test/env/mem_callbacks/mem_callbacks.o
00:02:20.209 LINK spdk_lspci
00:02:20.209 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:20.209 LINK rpc_client_test
00:02:20.475 LINK interrupt_tgt
00:02:20.475 LINK spdk_nvme_discover
00:02:20.475 LINK zipf
00:02:20.475 LINK jsoncat
00:02:20.475 LINK vtophys
00:02:20.475 LINK nvmf_tgt
00:02:20.475 CXX test/cpp_headers/crc64.o
00:02:20.475 LINK histogram_perf
00:02:20.475 LINK poller_perf
00:02:20.475 CXX test/cpp_headers/dif.o
00:02:20.475 CXX test/cpp_headers/dma.o
00:02:20.475 LINK env_dpdk_post_init
00:02:20.475 CXX test/cpp_headers/endian.o
00:02:20.475 CXX test/cpp_headers/env_dpdk.o
00:02:20.475 CXX test/cpp_headers/env.o
00:02:20.475 CXX test/cpp_headers/event.o
00:02:20.475 CXX test/cpp_headers/fd_group.o
00:02:20.475 CXX test/cpp_headers/fd.o
00:02:20.475 CXX test/cpp_headers/file.o
00:02:20.475 CXX test/cpp_headers/ftl.o
00:02:20.475 CXX test/cpp_headers/gpt_spec.o
00:02:20.475 CXX test/cpp_headers/hexlify.o
00:02:20.475 LINK iscsi_tgt
00:02:20.475 CXX test/cpp_headers/histogram_data.o
00:02:20.475 LINK stub
00:02:20.475 CXX test/cpp_headers/idxd.o
00:02:20.475 LINK spdk_trace_record
00:02:20.475 LINK ioat_perf
00:02:20.475 LINK spdk_tgt
00:02:20.475 LINK bdev_svc
00:02:20.475 LINK verify
00:02:20.475 CXX test/cpp_headers/idxd_spec.o
00:02:20.475 CXX test/cpp_headers/init.o
00:02:20.475 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:20.475 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:20.475 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:20.733 CXX test/cpp_headers/ioat.o
00:02:20.733 CXX test/cpp_headers/ioat_spec.o
00:02:20.733 CXX test/cpp_headers/iscsi_spec.o
00:02:20.733 CXX test/cpp_headers/json.o
00:02:20.733 LINK spdk_trace
00:02:20.733 LINK spdk_dd
00:02:20.733 CXX test/cpp_headers/jsonrpc.o
00:02:20.733 CXX test/cpp_headers/keyring.o
00:02:20.733 CXX test/cpp_headers/keyring_module.o
00:02:20.733 CXX test/cpp_headers/likely.o
00:02:20.733 CXX test/cpp_headers/log.o
00:02:20.733 CXX test/cpp_headers/lvol.o
00:02:20.733 CXX test/cpp_headers/memory.o
00:02:20.733 CXX test/cpp_headers/mmio.o
00:02:20.733 CXX test/cpp_headers/nbd.o
00:02:20.733 CXX test/cpp_headers/net.o
00:02:20.733 LINK pci_ut
00:02:20.733 CXX test/cpp_headers/notify.o
00:02:20.733 CXX test/cpp_headers/nvme.o
00:02:20.733 CXX test/cpp_headers/nvme_intel.o
00:02:20.733 CXX test/cpp_headers/nvme_ocssd.o
00:02:20.733 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:20.733 CXX test/cpp_headers/nvme_spec.o
00:02:20.733 CXX test/cpp_headers/nvme_zns.o
00:02:20.733 CXX test/cpp_headers/nvmf_cmd.o
00:02:20.993 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:20.993 CXX test/cpp_headers/nvmf.o
00:02:20.993 LINK test_dma
00:02:20.993 CXX test/cpp_headers/nvmf_spec.o
00:02:20.993 CXX test/cpp_headers/nvmf_transport.o
00:02:20.993 CXX test/cpp_headers/opal.o
00:02:20.993 CXX test/cpp_headers/opal_spec.o
00:02:20.993 CXX test/cpp_headers/pci_ids.o
00:02:20.993 CXX test/cpp_headers/pipe.o
00:02:20.993 CXX test/cpp_headers/queue.o
00:02:20.993 CC examples/sock/hello_world/hello_sock.o
00:02:20.993 CXX test/cpp_headers/reduce.o
00:02:20.993 LINK nvme_fuzz
00:02:20.993 CC examples/vmd/lsvmd/lsvmd.o
00:02:20.993 CC examples/thread/thread/thread_ex.o
00:02:20.993 CC examples/vmd/led/led.o
00:02:20.993 CC test/event/event_perf/event_perf.o
00:02:20.993 LINK spdk_bdev
00:02:20.993 CC examples/idxd/perf/perf.o
00:02:21.254 CXX test/cpp_headers/rpc.o
00:02:21.254 CC test/event/reactor/reactor.o
00:02:21.254 LINK spdk_nvme
00:02:21.254 CXX test/cpp_headers/scheduler.o
00:02:21.254 CXX test/cpp_headers/scsi.o
00:02:21.254 CXX test/cpp_headers/scsi_spec.o
00:02:21.254 CXX test/cpp_headers/sock.o
00:02:21.254 CXX test/cpp_headers/stdinc.o
00:02:21.254 CXX test/cpp_headers/string.o
00:02:21.254 CC test/event/reactor_perf/reactor_perf.o
00:02:21.254 CXX test/cpp_headers/thread.o
00:02:21.254 CXX test/cpp_headers/trace.o
00:02:21.254 CXX test/cpp_headers/trace_parser.o
00:02:21.254 CC test/event/app_repeat/app_repeat.o
00:02:21.254 CC app/vhost/vhost.o
00:02:21.254 CXX test/cpp_headers/tree.o
00:02:21.254 CXX test/cpp_headers/ublk.o
00:02:21.254 CXX test/cpp_headers/util.o
00:02:21.254 LINK vhost_fuzz
00:02:21.254 CXX test/cpp_headers/uuid.o
00:02:21.254 CXX test/cpp_headers/version.o
00:02:21.254 CXX test/cpp_headers/vfio_user_pci.o
00:02:21.254 CXX test/cpp_headers/vfio_user_spec.o
00:02:21.254 CXX test/cpp_headers/vhost.o
00:02:21.254 CXX test/cpp_headers/vmd.o
00:02:21.254 LINK lsvmd
00:02:21.254 CXX test/cpp_headers/xor.o
00:02:21.254 CXX test/cpp_headers/zipf.o
00:02:21.514 CC test/event/scheduler/scheduler.o
00:02:21.514 LINK spdk_nvme_perf
00:02:21.514 LINK led
00:02:21.514 LINK mem_callbacks
00:02:21.514 LINK event_perf
00:02:21.514 LINK reactor
00:02:21.514 LINK spdk_nvme_identify
00:02:21.514 LINK hello_sock
00:02:21.514 LINK reactor_perf
00:02:21.514 LINK thread
00:02:21.514 LINK spdk_top
00:02:21.514 CC test/nvme/err_injection/err_injection.o
00:02:21.514 CC test/nvme/startup/startup.o
00:02:21.514 CC test/nvme/sgl/sgl.o
00:02:21.514 CC test/nvme/aer/aer.o
00:02:21.514 CC test/nvme/reserve/reserve.o
00:02:21.514 CC test/nvme/reset/reset.o
00:02:21.514 CC test/nvme/e2edp/nvme_dp.o
00:02:21.772 CC test/nvme/overhead/overhead.o
00:02:21.772 LINK app_repeat
00:02:21.772 CC test/accel/dif/dif.o
00:02:21.772 CC test/nvme/simple_copy/simple_copy.o
00:02:21.772 LINK vhost
00:02:21.772 CC test/blobfs/mkfs/mkfs.o
00:02:21.772 CC test/nvme/boot_partition/boot_partition.o
00:02:21.772 CC test/nvme/connect_stress/connect_stress.o
00:02:21.772 CC test/lvol/esnap/esnap.o
00:02:21.772 CC test/nvme/compliance/nvme_compliance.o
00:02:21.772 CC test/nvme/fused_ordering/fused_ordering.o
00:02:21.772 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:21.772 CC test/nvme/fdp/fdp.o
00:02:21.772 LINK idxd_perf
00:02:21.772 CC test/nvme/cuse/cuse.o
00:02:21.772 LINK scheduler
00:02:21.772 LINK err_injection
00:02:21.772 LINK startup
00:02:22.030 LINK mkfs
00:02:22.030 LINK connect_stress
00:02:22.030 LINK reserve
00:02:22.030 LINK simple_copy
00:02:22.030 LINK nvme_dp
00:02:22.030 LINK aer
00:02:22.030 LINK fused_ordering
00:02:22.030 LINK boot_partition
00:02:22.030 CC examples/nvme/reconnect/reconnect.o
00:02:22.030 CC examples/nvme/arbitration/arbitration.o
00:02:22.030 CC examples/nvme/hello_world/hello_world.o
00:02:22.030 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:22.030 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:22.030 CC examples/nvme/abort/abort.o
00:02:22.030 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:22.030 CC examples/nvme/hotplug/hotplug.o
00:02:22.030 CC examples/accel/perf/accel_perf.o
00:02:22.030 LINK sgl
00:02:22.030 CC examples/blob/cli/blobcli.o
00:02:22.030 LINK doorbell_aers
00:02:22.030 LINK reset
00:02:22.030 CC examples/blob/hello_world/hello_blob.o
00:02:22.030 LINK overhead
00:02:22.030 LINK memory_ut
00:02:22.288 LINK fdp
00:02:22.288 LINK nvme_compliance
00:02:22.288 LINK dif
00:02:22.288 LINK pmr_persistence
00:02:22.288 LINK cmb_copy
00:02:22.288 LINK hello_world
00:02:22.546 LINK hotplug
00:02:22.546 LINK hello_blob
00:02:22.546 LINK abort
00:02:22.546 LINK reconnect
00:02:22.546 LINK arbitration
00:02:22.546 LINK accel_perf
00:02:22.546 LINK nvme_manage
00:02:22.546 LINK blobcli
00:02:22.802 CC test/bdev/bdevio/bdevio.o
00:02:22.802 LINK iscsi_fuzz
00:02:23.059 CC examples/bdev/hello_world/hello_bdev.o
00:02:23.059 CC examples/bdev/bdevperf/bdevperf.o
00:02:23.059 LINK bdevio
00:02:23.316 LINK hello_bdev
00:02:23.316 LINK cuse
00:02:23.881 LINK bdevperf
00:02:24.138 CC examples/nvmf/nvmf/nvmf.o
00:02:24.394 LINK nvmf
00:02:26.919 LINK esnap
00:02:26.919
00:02:26.919 real 0m49.534s
00:02:26.919 user 10m6.401s
00:02:26.919 sys 2m27.865s
00:02:26.919 19:31:44 make -- common/autotest_common.sh@1127 -- $ xtrace_disable
00:02:26.919 19:31:44 make -- common/autotest_common.sh@10 -- $ set +x
00:02:26.919 ************************************
00:02:26.919 END TEST make
00:02:26.919 ************************************
00:02:26.919 19:31:44 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:02:26.919 19:31:44 -- pm/common@29 -- $ signal_monitor_resources TERM
00:02:26.919 19:31:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:02:26.919 19:31:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.919 19:31:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:02:26.919 19:31:44 -- pm/common@44 -- $ pid=962956
00:02:26.919 19:31:44 -- pm/common@50 -- $ kill -TERM 962956
00:02:26.919 19:31:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.919 19:31:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:02:26.919 19:31:44 -- pm/common@44 -- $ pid=962958
00:02:26.919 19:31:44 -- pm/common@50 -- $ kill -TERM 962958
00:02:26.919 19:31:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.919 19:31:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:02:26.919 19:31:44 -- pm/common@44 -- $ pid=962960
00:02:26.919 19:31:44 -- pm/common@50 -- $ kill -TERM 962960
00:02:26.919 19:31:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:26.919 19:31:44 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:02:26.919 19:31:44 -- pm/common@44 -- $ pid=962988
00:02:26.919 19:31:44 -- pm/common@50 -- $ sudo -E kill -TERM 962988
00:02:27.177 19:31:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:27.177 19:31:44 -- nvmf/common.sh@7 -- # uname -s
00:02:27.177 19:31:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:27.177 19:31:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:27.177 19:31:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:27.177 19:31:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:27.177 19:31:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:27.177 19:31:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:27.177 19:31:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:27.177 19:31:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:27.177 19:31:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:27.177 19:31:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:27.178 19:31:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:02:27.178 19:31:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:02:27.178 19:31:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:27.178 19:31:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:27.178 19:31:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:27.178 19:31:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:27.178 19:31:44 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:27.178 19:31:44 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:27.178 19:31:44 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:27.178 19:31:44 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:27.178 19:31:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.178 19:31:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.178 19:31:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.178 19:31:44 -- paths/export.sh@5 -- # export PATH
00:02:27.178 19:31:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:27.178 19:31:44 -- nvmf/common.sh@51 -- # : 0
00:02:27.178 19:31:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:02:27.178 19:31:44 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:02:27.178 19:31:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:02:27.178 19:31:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:27.178 19:31:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:27.178 19:31:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:02:27.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:02:27.178 19:31:44 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:02:27.178 19:31:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:02:27.178 19:31:44 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:02:27.178 19:31:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:27.178 19:31:44 -- spdk/autotest.sh@32 -- # uname -s
00:02:27.178 19:31:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:27.178 19:31:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:27.178 19:31:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:27.178 19:31:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:27.178 19:31:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:27.178 19:31:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:27.178 19:31:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:27.178 19:31:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:27.178 19:31:44 -- spdk/autotest.sh@48 -- # udevadm_pid=1018472 00:02:27.178 19:31:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:27.178 19:31:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:27.178 19:31:44 -- pm/common@17 -- # local monitor 00:02:27.178 19:31:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.178 19:31:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.178 19:31:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.178 19:31:44 -- pm/common@21 -- # date +%s 00:02:27.178 19:31:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:27.178 19:31:44 -- pm/common@21 -- # date +%s 00:02:27.178 19:31:44 -- pm/common@25 -- # sleep 1 00:02:27.178 19:31:44 -- pm/common@21 -- # date +%s 00:02:27.178 19:31:44 -- pm/common@21 -- # date +%s 00:02:27.178 19:31:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842304 00:02:27.178 19:31:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842304 00:02:27.178 19:31:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842304 00:02:27.178 19:31:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721842304 00:02:27.178 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842304_collect-vmstat.pm.log 00:02:27.178 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842304_collect-cpu-load.pm.log 00:02:27.178 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842304_collect-cpu-temp.pm.log 00:02:27.178 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721842304_collect-bmc-pm.bmc.pm.log 00:02:28.113 19:31:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:28.113 19:31:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:28.113 19:31:45 -- common/autotest_common.sh@725 -- # xtrace_disable 00:02:28.113 19:31:45 -- common/autotest_common.sh@10 -- # set +x 00:02:28.113 19:31:45 -- spdk/autotest.sh@59 -- # create_test_list 00:02:28.113 19:31:45 -- common/autotest_common.sh@749 -- # xtrace_disable 00:02:28.113 19:31:45 -- common/autotest_common.sh@10 -- # set +x 00:02:28.113 19:31:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:28.113 19:31:45 -- spdk/autotest.sh@61 -- 
# readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.113 19:31:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.113 19:31:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:28.113 19:31:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:28.113 19:31:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:28.113 19:31:45 -- common/autotest_common.sh@1456 -- # uname 00:02:28.113 19:31:45 -- common/autotest_common.sh@1456 -- # '[' Linux = FreeBSD ']' 00:02:28.113 19:31:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:28.113 19:31:45 -- common/autotest_common.sh@1476 -- # uname 00:02:28.113 19:31:45 -- common/autotest_common.sh@1476 -- # [[ Linux = FreeBSD ]] 00:02:28.113 19:31:45 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:28.113 19:31:45 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:28.113 19:31:45 -- spdk/autotest.sh@72 -- # hash lcov 00:02:28.113 19:31:45 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:28.113 19:31:45 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:28.113 --rc lcov_branch_coverage=1 00:02:28.113 --rc lcov_function_coverage=1 00:02:28.113 --rc genhtml_branch_coverage=1 00:02:28.113 --rc genhtml_function_coverage=1 00:02:28.113 --rc genhtml_legend=1 00:02:28.113 --rc geninfo_all_blocks=1 00:02:28.113 ' 00:02:28.113 19:31:45 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:28.113 --rc lcov_branch_coverage=1 00:02:28.113 --rc lcov_function_coverage=1 00:02:28.113 --rc genhtml_branch_coverage=1 00:02:28.113 --rc genhtml_function_coverage=1 00:02:28.113 --rc genhtml_legend=1 00:02:28.113 --rc geninfo_all_blocks=1 00:02:28.113 ' 00:02:28.113 19:31:45 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:28.113 --rc lcov_branch_coverage=1 00:02:28.113 --rc lcov_function_coverage=1 00:02:28.113 --rc genhtml_branch_coverage=1 00:02:28.113 --rc genhtml_function_coverage=1 00:02:28.113 --rc genhtml_legend=1 00:02:28.113 --rc geninfo_all_blocks=1 00:02:28.113 --no-external' 00:02:28.113 19:31:45 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:28.113 --rc lcov_branch_coverage=1 00:02:28.113 --rc lcov_function_coverage=1 00:02:28.113 --rc genhtml_branch_coverage=1 00:02:28.113 --rc genhtml_function_coverage=1 00:02:28.113 --rc genhtml_legend=1 00:02:28.113 --rc geninfo_all_blocks=1 00:02:28.113 --no-external' 00:02:28.113 19:31:45 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:28.371 lcov: LCOV version 1.14 00:02:28.371 19:31:45 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:29.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:29.742 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:29.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:29.742 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
[geninfo went on to print the identical two-line warning pair ('<header>.gcno:no functions found' followed by 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno') for every remaining header object under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/, assert.gcno through vfio_user_pci.gcno; the repetitions are elided here.]
00:02:30.259
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:30.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:30.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:30.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:30.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:30.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:30.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:30.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:30.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:30.259 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:45.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:45.117 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:07.054 19:32:21 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:07.054 19:32:21 -- common/autotest_common.sh@725 -- # xtrace_disable 00:03:07.054 19:32:21 -- common/autotest_common.sh@10 -- # set +x 00:03:07.054 19:32:21 -- spdk/autotest.sh@91 -- # rm -f 00:03:07.054 19:32:21 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:07.054 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:07.054 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:07.054 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:07.054 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:07.054 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:07.054 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:07.054 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:07.054 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:07.054 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:07.054 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:07.054 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:07.054 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:07.054 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:07.054 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:07.054 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:07.054 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:07.054 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:07.054 19:32:23 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:07.054 19:32:23 -- common/autotest_common.sh@1670 -- # zoned_devs=() 00:03:07.054 19:32:23 -- common/autotest_common.sh@1670 -- # local -gA zoned_devs 00:03:07.054 19:32:23 -- common/autotest_common.sh@1671 -- # local nvme bdf 00:03:07.054 19:32:23 -- common/autotest_common.sh@1673 -- # for nvme in /sys/block/nvme* 00:03:07.054 19:32:23 -- 
common/autotest_common.sh@1674 -- # is_block_zoned nvme0n1 00:03:07.054 19:32:23 -- common/autotest_common.sh@1663 -- # local device=nvme0n1 00:03:07.054 19:32:23 -- common/autotest_common.sh@1665 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.054 19:32:23 -- common/autotest_common.sh@1666 -- # [[ none != none ]] 00:03:07.054 19:32:23 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:07.054 19:32:23 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:07.054 19:32:23 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:07.054 19:32:23 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:07.054 19:32:23 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:07.054 19:32:23 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:07.054 No valid GPT data, bailing 00:03:07.054 19:32:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:07.054 19:32:23 -- scripts/common.sh@391 -- # pt= 00:03:07.054 19:32:23 -- scripts/common.sh@392 -- # return 1 00:03:07.054 19:32:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:07.054 1+0 records in 00:03:07.054 1+0 records out 00:03:07.054 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00246423 s, 426 MB/s 00:03:07.054 19:32:23 -- spdk/autotest.sh@118 -- # sync 00:03:07.054 19:32:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:07.054 19:32:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:07.054 19:32:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:07.989 19:32:25 -- spdk/autotest.sh@124 -- # uname -s 00:03:07.989 19:32:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:07.989 19:32:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.989 19:32:25 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:07.989 19:32:25 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:07.989 19:32:25 -- common/autotest_common.sh@10 -- # set +x 00:03:07.989 ************************************ 00:03:07.989 START TEST setup.sh 00:03:07.989 ************************************ 00:03:07.989 19:32:25 setup.sh -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:07.989 * Looking for test storage... 00:03:07.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.989 19:32:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:07.989 19:32:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:07.989 19:32:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:07.989 19:32:25 setup.sh -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:07.989 19:32:25 setup.sh -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:07.989 19:32:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:07.989 ************************************ 00:03:07.989 START TEST acl 00:03:07.989 ************************************ 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:07.989 * Looking for test storage... 
00:03:07.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:07.989 19:32:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # zoned_devs=() 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # local -gA zoned_devs 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1671 -- # local nvme bdf 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1673 -- # for nvme in /sys/block/nvme* 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1674 -- # is_block_zoned nvme0n1 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1663 -- # local device=nvme0n1 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:07.989 19:32:25 setup.sh.acl -- common/autotest_common.sh@1666 -- # [[ none != none ]] 00:03:07.989 19:32:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:07.989 19:32:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:07.989 19:32:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:07.989 19:32:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:07.989 19:32:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:07.989 19:32:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:07.989 19:32:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.364 19:32:26 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:09.364 19:32:26 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:09.364 19:32:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.364 19:32:26 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:09.364 19:32:26 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.364 19:32:26 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:10.298 Hugepages 00:03:10.298 node hugesize free / total 00:03:10.298 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.298 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.298 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.298 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.298 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.298 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.556 00:03:10.556 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.556 19:32:27 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]]
[setup output repeated the identical three-line xtrace check ('[[ <BDF> == *:*:*.* ]]', '[[ ioatdma == nvme ]]', 'continue') for each I/OAT DMA channel from 0000:00:04.1 through 0000:80:04.3; the repetitions are elided here.]
00:03:10.557 19:32:27
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:10.557 19:32:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:10.557 19:32:27 setup.sh.acl -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:10.557 19:32:27 setup.sh.acl -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:10.557 19:32:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:10.557 ************************************ 00:03:10.557 START TEST denied 00:03:10.557 ************************************ 00:03:10.557 19:32:27 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # denied 00:03:10.557 19:32:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:10.557 19:32:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:10.557 19:32:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:10.557 19:32:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.557 19:32:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:11.929 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:11.929 19:32:29 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:11.929 19:32:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.458 00:03:14.458 real 0m3.697s 00:03:14.458 user 0m0.986s 00:03:14.458 sys 0m1.809s 00:03:14.458 19:32:31 setup.sh.acl.denied -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:14.458 19:32:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.458 ************************************ 00:03:14.458 END TEST denied 00:03:14.458 ************************************ 00:03:14.458 19:32:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.458 19:32:31 setup.sh.acl -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:14.458 19:32:31 setup.sh.acl -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:14.458 19:32:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.458 ************************************ 00:03:14.458 START TEST allowed 00:03:14.458 ************************************ 00:03:14.458 19:32:31 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # allowed 00:03:14.458 19:32:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:14.458 19:32:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.458 19:32:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:14.458 19:32:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.458 19:32:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.985 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:16.985 19:32:33 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:16.985 19:32:33 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:16.985 19:32:33 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:16.985 19:32:33 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.985 19:32:33 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.360 00:03:18.360 real 0m3.898s 00:03:18.360 user 0m1.023s 00:03:18.360 sys 0m1.728s 00:03:18.360 19:32:35 setup.sh.acl.allowed -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:18.360 19:32:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:18.360 ************************************ 00:03:18.360 END TEST allowed 00:03:18.360 ************************************ 00:03:18.360 00:03:18.360 real 0m10.303s 00:03:18.360 user 0m3.099s 00:03:18.360 sys 0m5.216s 00:03:18.360 19:32:35 setup.sh.acl -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:18.360 19:32:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:18.360 ************************************ 00:03:18.360 END TEST acl 00:03:18.360 ************************************ 00:03:18.360 19:32:35 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.360 19:32:35 setup.sh -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:18.360 19:32:35 setup.sh -- 
common/autotest_common.sh@1108 -- # xtrace_disable 00:03:18.360 19:32:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.360 ************************************ 00:03:18.360 START TEST hugepages 00:03:18.360 ************************************ 00:03:18.360 19:32:35 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.360 * Looking for test storage... 00:03:18.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43849564 kB' 'MemAvailable: 47328704 kB' 'Buffers: 2704 kB' 'Cached: 10207892 kB' 'SwapCached: 0 kB' 'Active: 7184980 kB' 'Inactive: 3493852 kB' 'Active(anon): 6796076 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471804 kB' 'Mapped: 184836 kB' 'Shmem: 6327840 kB' 'KReclaimable: 180444 kB' 'Slab: 553280 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372836 kB' 'KernelStack: 12832 kB' 'PageTables: 8224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 7917472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196388 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:18.360 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue
[the traced get_meminfo scan then tested every remaining /proc/meminfo field (MemFree, MemAvailable, Buffers, and so on through AnonHugePages) against Hugepagesize in exactly the same way, emitting the same '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue / IFS / read' xtrace for each one; the repetitions are elided here.]
00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
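The loop condensed above is the get_meminfo scan in setup/common.sh: each /proc/meminfo line is split on ': ' and the value is returned once the requested key matches. A minimal stand-alone sketch of that pattern, reconstructed from the script@line references in the xtrace rather than copied from the SPDK sources:

    # Sketch of the common.sh@31-33 pattern: look up one key in a meminfo-format file.
    get=Hugepagesize                         # the key this part of the trace is after
    while IFS=': ' read -r var val _; do     # @31: split "Key:   value kB" into fields
        [[ $var == "$get" ]] || continue     # @32: not the requested key, keep scanning
        echo "$val"                          # @33: prints 2048 on this machine
        break
    done < /proc/meminfo

The trailing "_" absorbs the "kB" unit, so the caller gets a bare number; hugepages.sh stores it below as default_hugepages=2048 (kB).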
00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.362 19:32:35 
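The four "echo 0" entries just above are clear_hp zeroing both hugepage pools (typically 2048 kB and 1048576 kB on x86_64) on each of the two NUMA nodes before the test sets its own counts. Against the standard sysfs layout the trace walks, that is roughly the following (needs root; a sketch, not the verbatim hugepages.sh code):

    # Rough equivalent of clear_hp (hugepages.sh@37-41): empty every per-node pool.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"      # release this node's pages of that size
        done
    done

With CLEAR_HUGE=yes exported, the scripts/setup.sh run that follows rebuilds the pools; the default_setup test requests size=2097152 kB of the 2048 kB default page, i.e. nr_hugepages = 2097152 / 2048 = 1024 pages, all placed on node 0 (nodes_test[0]=1024 in the trace).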
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.362 19:32:35 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.362 19:32:35 setup.sh.hugepages -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:18.362 19:32:35 setup.sh.hugepages -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:18.362 19:32:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.362 ************************************ 00:03:18.362 START TEST default_setup 00:03:18.362 ************************************ 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # default_setup 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.362 19:32:35 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.736 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.736 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.736 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.736 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.736 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.736 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.736 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:03:19.736 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:19.736 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.675 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45931436 kB' 'MemAvailable: 49410576 kB' 'Buffers: 2704 kB' 'Cached: 10207988 kB' 'SwapCached: 0 kB' 'Active: 7203632 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814728 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490040 kB' 'Mapped: 185036 kB' 'Shmem: 6327936 kB' 'KReclaimable: 180444 kB' 'Slab: 552788 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372344 kB' 'KernelStack: 12688 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:20.676 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed through 00:03:20.677: the same read loop now checks each key against AnonHugePages; MemTotal through WritebackTmp all continue] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- #
continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45931436 kB' 'MemAvailable: 49410576 kB' 'Buffers: 2704 kB' 'Cached: 10207992 kB' 'SwapCached: 0 kB' 'Active: 7203712 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814808 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490156 kB' 'Mapped: 184940 kB' 'Shmem: 6327940 kB' 'KReclaimable: 180444 kB' 'Slab: 552748 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372304 kB' 'KernelStack: 12688 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.677 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:20.678 19:32:37 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed through 00:03:20.679: the same read loop now checks each key against HugePages_Surp; Cached through CmaTotal all continue] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.679 19:32:38
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
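[Note: the get_meminfo helper traced above can be reconstructed from the xtrace alone; the sketch below is that reconstruction, an approximation rather than the verbatim setup/common.sh source. The \H\u\g\e\P\a\g\e\s\_\S\u\r\p form in the trace is how set -x re-quotes the right-hand side of [[ == ]]: each character is backslash-escaped so the operand is compared literally rather than as a glob.]

  # Reconstructed sketch of setup/common.sh's get_meminfo, inferred from the
  # trace above (not the verbatim SPDK source).
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # A node index switches the source to the node-local sysfs file.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      elif [[ -n $node ]]; then
          return 1   # a node was requested but its meminfo is missing
      fi
      mapfile -t mem < "$mem_f"
      # Node files prefix every key with "Node <n> "; strip it (extglob).
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          if [[ $var == "$get" ]]; then
              echo "$val"   # kB figure, or a bare count for HugePages_* keys
              return 0
          fi
      done
  }

[Note: against the snapshot printed below, get_meminfo HugePages_Rsvd walks every key in order until HugePages_Rsvd matches, echoes 0 and returns — exactly the scan the condensed entries that follow record.]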
00:03:20.679 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45931664 kB' 'MemAvailable: 49410804 kB' 'Buffers: 2704 kB' 'Cached: 10208008 kB' 'SwapCached: 0 kB' 'Active: 7203640 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814736 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490000 kB' 'Mapped: 184864 kB' 'Shmem: 6327956 kB' 'KReclaimable: 180444 kB' 'Slab: 552752 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372308 kB' 'KernelStack: 12688 kB' 'PageTables: 7992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196308 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:20.680 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d; none match, every iteration continues]
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:20.681 nr_hugepages=1024
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:20.681 resv_hugepages=0
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:20.681 surplus_hugepages=0
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:20.681 anon_hugepages=0
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
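[Note: the hugepages.sh@102-@109 entries above are the test's accounting: 1024 pages configured, zero reserved, zero surplus, so every configured page must still be free. A standalone sketch of those checks, assuming the literal 1024 compared at @107 is the HugePages_Free value captured earlier in the run:]

  # Hedged sketch of the hugepages.sh@107-@109 consistency checks; variable
  # names follow the trace, the free=1024 binding is an assumption.
  nr_hugepages=1024                     # pages the test requested
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  free=1024                             # assumed: HugePages_Free from earlier
  (( free == nr_hugepages + surp + resv ))   # nothing in use or unaccounted
  (( free == nr_hugepages ))                 # equivalent here, since surp=resv=0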
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.681 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.682 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45931496 kB' 'MemAvailable: 49410636 kB' 'Buffers: 2704 kB' 'Cached: 10208032 kB' 'SwapCached: 0 kB' 'Active: 7203720 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814816 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490100 kB' 'Mapped: 184864 kB' 'Shmem: 6327980 kB' 'KReclaimable: 180444 kB' 'Slab: 552752 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372308 kB' 'KernelStack: 12720 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7939504 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196324 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:20.682 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l; none match, every iteration continues]
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
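[Note: the repeated \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l rendering is bash xtrace behaviour, not corruption: when the right-hand side of [[ == ]] comes from a quoted expansion, set -x prints it with every character backslash-escaped so a replayed command would still match literally instead of as a glob. A minimal demo:]

  set -x
  get=HugePages_Total
  [[ HugePages_Total == "$get" ]] && echo match
  # xtrace prints: [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]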
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20619492 kB' 'MemUsed: 12257448 kB' 'SwapCached: 0 kB' 'Active: 5667112 kB' 'Inactive: 3248472 kB' 'Active(anon): 5456376 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614380 kB' 'Mapped: 148716 kB' 'AnonPages: 304328 kB' 'Shmem: 5155172 kB' 'KernelStack: 7816 kB' 'PageTables: 4848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359792 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 243976 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
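[Note: get_nodes (hugepages.sh@27-@33 above) discovers the NUMA layout by globbing sysfs; the trace only shows the expanded results (node0 holds all 1024 pages, node1 none), so reading the counts from the sysfs nr_hugepages file below is an assumption:]

  # Sketch of the get_nodes pass; the per-node count source is assumed,
  # since xtrace only shows the expanded assignments 1024 and 0.
  shopt -s extglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # "/sys/devices/system/node/node0" -> index "0"
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}   # 2 on this host
  (( no_nodes > 0 ))          # sanity: at least one node was enumerated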
00:03:20.943 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [xtrace condensed: node0 keys MemTotal through HugePages_Total compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; none match so far, every iteration continues]
00:03:20.945 19:32:38
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:20.945 node0=1024 expecting 1024
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:20.945 
00:03:20.945 real 0m2.428s
00:03:20.945 user 0m0.657s
00:03:20.945 sys 0m0.887s
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1127 -- # xtrace_disable
00:03:20.945 19:32:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:20.945 ************************************
00:03:20.945 END TEST default_setup
00:03:20.945 ************************************
00:03:20.945 19:32:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:20.945 19:32:38 setup.sh.hugepages -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:03:20.945 19:32:38 setup.sh.hugepages -- common/autotest_common.sh@1108 -- # xtrace_disable
00:03:20.945 19:32:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:20.945 ************************************
00:03:20.945 START TEST per_node_1G_alloc
00:03:20.945 ************************************
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # per_node_1G_alloc
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:20.945 19:32:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.920 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.920 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.920 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.920 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.920 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.920 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.920 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.920 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.920 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:21.920 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:21.920 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:21.920 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:21.920 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:21.920 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:21.920 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:21.920 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:21.920 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
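With the default hugepage size at 2048 kB, the requested 1048576 kB works out to 512 pages, and get_test_nr_hugepages_per_node then assigns 512 to each of nodes 0 and 1, 1024 in total, matching nr_hugepages=1024 above. The internals of scripts/setup.sh are not traced in this log; a minimal sketch of the standard per-node sysfs knob such an NRHUGE/HUGENODE request drives (the path is the stock kernel hugetlb interface; the loop bounds are assumed from HUGENODE=0,1):

    # assumed sketch, not the traced setup.sh source
    NRHUGE=512
    for node in 0 1; do
        # size the per-node 2 MiB hugepage pool
        echo "$NRHUGE" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done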
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.183 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939096 kB' 'MemAvailable: 49418236 kB' 'Buffers: 2704 kB' 'Cached: 10208104 kB' 'SwapCached: 0 kB' 'Active: 7203992 kB' 'Inactive: 3493852 kB' 'Active(anon): 6815088 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490276 kB' 'Mapped: 184976 kB' 'Shmem: 6328052 kB' 'KReclaimable: 180444 kB' 'Slab: 552536 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372092 kB' 'KernelStack: 12688 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
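The trace that follows (condensed below) is get_meminfo walking that snapshot one "IFS=': ' read -r var val _" at a time. A re-sketch consistent with the traced commands; the names mirror the trace, but this is an approximation, not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    shopt -s extglob   # required for the +([0-9]) pattern below
    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from a
    # node's own meminfo file when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node N "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long 'continue' runs in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }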
[xtrace condensed: the AnonHugePages lookup 'continue's past every key of the snapshot above, from MemTotal onward, until AnonHugePages matches]
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
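Per the trace, verify_nr_hugepages gathers its three counters with exactly this kind of lookup (anon above, surp and resv below); as a usage sketch with the variable names from the trace:

    anon=$(get_meminfo AnonHugePages)    # 0 here: no transparent hugepages in play
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the configured pool
    resv=$(get_meminfo HugePages_Rsvd)   # pages reserved but not yet faulted in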
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.184 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939756 kB' 'MemAvailable: 49418896 kB' 'Buffers: 2704 kB' 'Cached: 10208108 kB' 'SwapCached: 0 kB' 'Active: 7203932 kB' 'Inactive: 3493852 kB' 'Active(anon): 6815028 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490244 kB' 'Mapped: 184956 kB' 'Shmem: 6328056 kB' 'KReclaimable: 180444 kB' 'Slab: 552504 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372060 kB' 'KernelStack: 12704 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: the HugePages_Surp lookup 'continue's past every key of the snapshot above until HugePages_Surp matches]
00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
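The snapshots are internally consistent on the hugepage side: 1024 HugePages_Total at a 2048 kB Hugepagesize accounts for the Hugetlb figure exactly, and with HugePages_Free still at 1024 the pool is untouched, so the surp=0 above and the resv lookup below are the expected outcome.

    echo $(( 1024 * 2048 ))   # 2097152, the Hugetlb value in kB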
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939888 kB' 'MemAvailable: 49419028 kB' 'Buffers: 2704 kB' 'Cached: 10208124 kB' 'SwapCached: 0 kB' 'Active: 7203828 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814924 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490084 kB' 'Mapped: 184880 kB' 'Shmem: 6328072 kB' 'KReclaimable: 180444 kB' 'Slab: 552524 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372080 kB' 'KernelStack: 12720 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.186 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.187 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.188 nr_hugepages=1024 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.188 resv_hugepages=0 00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.188 surplus_hugepages=0 00:03:22.188 19:32:39 
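The trace above is setup/common.sh's get_meminfo helper at work: it snapshots the meminfo file into an array, strips any "Node N " prefixes, then walks the keys until the requested field matches and echoes its value. A minimal self-contained sketch of the same pattern (an illustrative reconstruction, not the exact SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo() {                         # usage: get_meminfo <Field> [<node>]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Per-node stats come from sysfs when a node index is supplied.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # drop "Node 0 " prefixes from sysfs lines
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints 0 here; the repeated [[ Key == \H\u\g\e... ]] lines in the trace are this loop's comparison expanded by xtrace.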
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:22.188 anon_hugepages=0
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.188 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939772 kB' 'MemAvailable: 49418912 kB' 'Buffers: 2704 kB' 'Cached: 10208144 kB' 'SwapCached: 0 kB' 'Active: 7203632 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814728 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489860 kB' 'Mapped: 184880 kB' 'Shmem: 6328092 kB' 'KReclaimable: 180444 kB' 'Slab: 552524 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372080 kB' 'KernelStack: 12704 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938872 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:22.189 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
[trace elided: every remaining meminfo key is compared against HugePages_Total and skipped via "continue" until the match below]
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
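The arithmetic guards in the trace assert that the allocation is exactly what was requested: the full nr_hugepages worth of pages, with no surplus and nothing reserved, and HugePages_Total agreeing. Re-stated as a standalone check (a sketch built on the get_meminfo reconstruction above; the variable names follow the test's):

    nr_hugepages=1024                          # requested page count
    surp=$(get_meminfo HugePages_Surp)         # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)         # 0 in this run
    total=$(get_meminfo HugePages_Total)       # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || exit 1

With surp and resv both zero the invariant reduces to total == nr_hugepages, which is why the run proceeds past hugepages.sh@110.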
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
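get_nodes discovers the NUMA layout by globbing the sysfs node directories; two nodes turn up here, and the test expects its 1024 pages split 512/512 between them. A sketch of that enumeration (the literal 512 in the trace is the per-node target already expanded by xtrace; where it comes from in the test config is not shown in this log):

    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512          # per-node target, indexed by node number
    done
    no_nodes=${#nodes_sys[@]}                  # 2 on this machine
    (( no_nodes > 0 ))                         # sanity: at least one node present

${node##*node} strips everything through the last "node", turning /sys/devices/system/node/node0 into the bare index 0.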
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21679640 kB' 'MemUsed: 11197300 kB' 'SwapCached: 0 kB' 'Active: 5670964 kB' 'Inactive: 3248472 kB' 'Active(anon): 5460228 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614388 kB' 'Mapped: 149172 kB' 'AnonPages: 308252 kB' 'Shmem: 5155180 kB' 'KernelStack: 7864 kB' 'PageTables: 4944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359744 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 243928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
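With a node argument, get_meminfo swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the extglob strip removes; the snapshot above then shows all 512 node0 pages allocated and free. The surrounding hugepages.sh loop presumably verifies each node's share; a hedged sketch of that intent (the final comparison is my reading of where the loop is headed, as the log is cut off before the result):

    declare -a nodes_test=([0]=512 [1]=512)    # expected per-node split
    resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))         # fold reserved pages into the target
        surp=$(get_meminfo HugePages_Surp "$node")
        free=$(get_meminfo HugePages_Free "$node")
        (( free + surp == nodes_test[node] )) || echo "node$node short of pages"
    done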
00:03:22.451 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[trace elided: the node0 meminfo keys are scanned against HugePages_Surp the same way, each non-matching key hitting "continue"; the log is truncated mid-scan after HugePages_Total]
-r var val _ 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24261196 kB' 'MemUsed: 3403592 kB' 'SwapCached: 0 kB' 'Active: 1536336 kB' 'Inactive: 245380 kB' 'Active(anon): 1358168 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1596504 kB' 'Mapped: 36296 kB' 'AnonPages: 185292 kB' 'Shmem: 1172956 kB' 'KernelStack: 4840 kB' 'PageTables: 2992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64628 kB' 'Slab: 192780 kB' 'SReclaimable: 64628 kB' 'SUnreclaim: 128152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- 
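The lookup traced above boils down to one small parsing idiom. Below is a minimal standalone sketch reconstructed from the xtrace, not the verbatim test/setup/common.sh helper; it assumes bash 4+ (mapfile) and extglob for the "Node <n> " prefix strip.

  #!/usr/bin/env bash
  # Reconstruction of the get_meminfo idiom shown in the trace: pick one key
  # out of /proc/meminfo, or out of a per-node meminfo file when a node is given.
  shopt -s extglob   # enables the +([0-9]) pattern used below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo mem
      # Per-node counters live in sysfs; fall back to the global file otherwise.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # sysfs lines carry a "Node <n> " prefix; strip it so keys line up.
      mem=("${mem[@]#Node +([0-9]) }")
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Surp 1   # prints 0 for the node1 data dumped above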
00:03:22.452 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [node1 meminfo scan: MemTotal … HugePages_Free each fail the HugePages_Surp match and hit 'continue']
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:22.454 node0=512 expecting 512
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:22.454 node1=512 expecting 512
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:22.454 
00:03:22.454 real 0m1.479s
00:03:22.454 user 0m0.602s
00:03:22.454 sys 0m0.841s
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1127 -- # xtrace_disable
00:03:22.454 19:32:39 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:22.454 ************************************
00:03:22.454 END TEST per_node_1G_alloc
00:03:22.454 ************************************
00:03:22.454 19:32:39 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:22.454 19:32:39 setup.sh.hugepages -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:03:22.454 19:32:39 setup.sh.hugepages -- common/autotest_common.sh@1108 -- # xtrace_disable
00:03:22.454 19:32:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:22.454 ************************************
00:03:22.454 START TEST even_2G_alloc
00:03:22.454 ************************************
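The node0=512 / node1=512 lines just logged are per_node_1G_alloc's closing assertion. A hypothetical condensation of that check follows; the real hugepages.sh also folds HugePages_Surp and reserved-page adjustments into the count (both 0 in this run), which is omitted here.

  # Sketch only: compare each node's hugepage count to the expected split.
  expected=512
  for n in 0 1; do
      got=$(get_meminfo HugePages_Total "$n")   # get_meminfo as sketched earlier
      echo "node$n=$got expecting $expected"
      [[ $got == "$expected" ]] || exit 1
  done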
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # even_2G_alloc
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:22.454 19:32:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
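The parameters just traced encode the sizing rule for this test: get_test_nr_hugepages received 2097152 (kB, i.e. 2 GiB), the system hugepage size is 2048 kB per the meminfo dumps, and the resulting 1024 pages are split across both NUMA nodes. A worked sketch of that arithmetic; variable names mirror the trace, and the division is an assumption consistent with size=2097152 yielding nr_hugepages=1024:

  size=2097152                                   # requested kB (2 GiB)
  default_hugepages=2048                         # Hugepagesize, in kB
  nr_hugepages=$(( size / default_hugepages ))   # 1024 pages
  _no_nodes=2
  # HUGE_EVEN_ALLOC=yes: each node gets an equal share, as nodes_test[] shows
  per_node=$(( nr_hugepages / _no_nodes ))       # 512 on node0, 512 on node1
  echo "nr_hugepages=$nr_hugepages per_node=$per_node"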
00:03:23.388 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.388 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:23.648 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.648 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.648 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.648 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.648 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.648 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.648 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.648 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:23.648 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:23.648 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:23.648 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:23.648 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:23.648 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:23.648 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:23.648 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.648 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.649 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45924152 kB' 'MemAvailable: 49403292 kB' 'Buffers: 2704 kB' 'Cached: 10208236 kB' 'SwapCached: 0 kB' 'Active: 7204716 kB' 'Inactive: 3493852 kB' 'Active(anon): 6815812 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490940 kB' 'Mapped: 184992 kB' 'Shmem: 6328184 kB' 'KReclaimable: 180444 kB' 'Slab: 552656 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372212 kB' 'KernelStack: 12736 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
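The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test traced above is verify_nr_hugepages' transparent-hugepage gate: AnonHugePages is only worth sampling when THP is not pinned to "never", and the bracketed entry in the sysfs file marks the active mode. A sketch of the same gate, using the get_meminfo reconstruction from earlier:

  # Read the THP mode string, e.g. "always [madvise] never" as in this run.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo AnonHugePages)   # 0 kB in the dump above
  fi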
00:03:23.649 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [/proc/meminfo scan: MemTotal … HardwareCorrupted each fail the AnonHugePages match and hit 'continue']
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.650 19:32:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45927640 kB' 'MemAvailable: 49406780 kB' 'Buffers: 2704 kB' 'Cached: 10208236 kB' 'SwapCached: 0 kB' 'Active: 7204448 kB' 'Inactive: 3493852 kB' 'Active(anon): 6815544 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490656 kB' 'Mapped: 184992 kB' 'Shmem: 6328184 kB' 'KReclaimable: 180444 kB' 'Slab: 552656 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372212 kB' 'KernelStack: 12720 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7938720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
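Both /proc/meminfo dumps in this pass agree on the hugepage counters, which is the part the scan below is after: the even 1024-page allocation is in place before the per-node breakdown is checked. A quick way to pull just those keys out of the same file:

  grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb)' /proc/meminfo
  # -> HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0,
  #    HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 2097152 kB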
00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [second /proc/meminfo scan: MemTotal … PageTables each fail the HugePages_Surp match and hit 'continue'; the excerpt ends mid-scan]
00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.651 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45927668 kB' 'MemAvailable: 49406808 kB' 'Buffers: 2704 kB' 'Cached: 10208260 kB' 'SwapCached: 0 kB' 'Active: 7204732 kB' 'Inactive: 3493852 kB' 'Active(anon): 6815828 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490856 kB' 'Mapped: 184968 kB' 'Shmem: 6328208 kB' 'KReclaimable: 180444 kB' 'Slab: 552656 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372212 kB' 'KernelStack: 12720 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7939108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.652 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.913 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
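The records above show setup/common.sh's get_meminfo helper run twice: the first pass ended at common.sh@33 with 'echo 0' / 'return 0' (captured as surp=0 at hugepages.sh@99), and a second pass is now scanning for HugePages_Rsvd. For readers who don't want to decode the xtrace, here is a minimal bash sketch of what the traced helper does, reconstructed from the common.sh line numbers in the records — an approximation, not the verbatim SPDK source:

    # Reconstructed from the traced common.sh@17-@33; an approximation, not
    # verbatim source. get_meminfo KEY [NODE] prints the value column for KEY
    # from /proc/meminfo (or a node-local copy) and returns 0, else 1.
    shopt -s extglob                      # for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem

        # With a node argument the per-node view is used instead; with node
        # empty the test probes the bogus path node/meminfo and falls through,
        # exactly as the '[[ -e /sys/devices/system/node/node/meminfo ]]'
        # records above show.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix

        # Each meminfo line produces one '[[ key == \K\e\y ]]' / 'continue'
        # record pair in the xtrace until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

The surp=0 and resv=0 assignments in the trace are consistent with command substitutions like surp=$(get_meminfo HugePages_Surp), which is why every call replays the full key-by-key scan from MemTotal onward.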
00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 
19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.914 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.915 nr_hugepages=1024 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.915 resv_hugepages=0 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.915 surplus_hugepages=0 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.915 anon_hugepages=0 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45928032 
kB' 'MemAvailable: 49407172 kB' 'Buffers: 2704 kB' 'Cached: 10208284 kB' 'SwapCached: 0 kB' 'Active: 7204580 kB' 'Inactive: 3493852 kB' 'Active(anon): 6815676 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490672 kB' 'Mapped: 184892 kB' 'Shmem: 6328232 kB' 'KReclaimable: 180444 kB' 'Slab: 552640 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372196 kB' 'KernelStack: 12704 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7939132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.915 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
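The echo and arithmetic records at hugepages.sh@99-@110 earlier in the trace (surp=0, resv=0, nr_hugepages=1024, then '(( 1024 == nr_hugepages + surp + resv ))') implement a pool-consistency check: the HugePages_Total the kernel reports must equal the requested pages plus any surplus and reserved pages. A hedged sketch of that bookkeeping, building on the get_meminfo sketch above — the wrapper name verify_hugepage_accounting is hypothetical, not an SPDK function:

    # Hypothetical wrapper around the bookkeeping traced at hugepages.sh@99-@110.
    verify_hugepage_accounting() {
        local nr_hugepages=1024               # requested pool size in this run
        local surp resv total

        surp=$(get_meminfo HugePages_Surp)    # 0 in the trace
        resv=$(get_meminfo HugePages_Rsvd)    # 0 in the trace
        total=$(get_meminfo HugePages_Total)  # 1024 in the trace

        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        echo "anon_hugepages=0"               # AnonHugePages reads 0 kB throughout

        # Every page the kernel reports must be accounted for.
        (( total == nr_hugepages + surp + resv ))
    }

This matches the interleaved stdout seen in the records (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and explains the third full meminfo scan still in progress here, which fetches HugePages_Total.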
00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.916 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21668336 kB' 'MemUsed: 11208604 kB' 'SwapCached: 0 kB' 'Active: 5668380 kB' 'Inactive: 3248472 kB' 'Active(anon): 5457644 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614452 kB' 'Mapped: 148748 kB' 'AnonPages: 305544 kB' 'Shmem: 5155244 kB' 'KernelStack: 7880 kB' 'PageTables: 5004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359796 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 243980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
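Almost all of the trace above is a single helper, get_meminfo in setup/common.sh, being single-stepped by xtrace. The pattern it implements is small enough to isolate; the sketch below is reconstructed from the trace alone (the per-node file path, the "Node N " prefix strip, and the IFS=': ' read loop all appear verbatim in the log), so treat it as an illustration rather than the exact SPDK source.

    # Sketch of the meminfo lookup pattern traced above. Reconstructed from
    # the xtrace alone; the real get_meminfo may differ in detail.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _

        # Per-node statistics live under /sys and carry a "Node <N> " prefix,
        # e.g. "Node 0 HugePages_Total:   512".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")   # strip the per-node prefix, if any

        # Key-by-key scan: every non-matching line is skipped, which is what
        # produces the long [[ ... ]] / continue runs in this log.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Usage mirroring the trace: get_meminfo_sketch HugePages_Total    -> 1024
    #                            get_meminfo_sketch HugePages_Surp 0   -> 0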
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.917 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21668336 kB' 'MemUsed: 11208604 kB' 'SwapCached: 0 kB' 'Active: 5668380 kB' 'Inactive: 3248472 kB' 'Active(anon): 5457644 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614452 kB' 'Mapped: 148748 kB' 'AnonPages: 305544 kB' 'Shmem: 5155244 kB' 'KernelStack: 7880 kB' 'PageTables: 5004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359796 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 243980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the node0 keys (MemTotal through Unaccepted, then HugePages_Total and HugePages_Free) are scanned and skipped one by one until HugePages_Surp matches]
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:23.919 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24259444 kB' 'MemUsed: 3405344 kB' 'SwapCached: 0 kB' 'Active: 1536884 kB' 'Inactive: 245380 kB' 'Active(anon): 1358716 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1596556 kB' 'Mapped: 36144 kB' 'AnonPages: 185816 kB' 'Shmem: 1173008 kB' 'KernelStack: 4888 kB' 'PageTables: 3080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64628 kB' 'Slab: 192844 kB' 'SReclaimable: 64628 kB' 'SUnreclaim: 128216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the node1 keys (MemTotal through Unaccepted, then HugePages_Total and HugePages_Free) are scanned and skipped one by one until HugePages_Surp matches]
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:23.920 node0=512 expecting 512
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:23.920 node1=512 expecting 512
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:23.920 real	0m1.461s
00:03:23.920 user	0m0.625s
00:03:23.920 sys	0m0.790s
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1127 -- # xtrace_disable
00:03:23.920 19:32:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:23.920 ************************************
00:03:23.920 END TEST even_2G_alloc
00:03:23.920 ************************************
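Before the odd_alloc trace that follows, it is worth spelling out the split that get_test_nr_hugepages_per_node performs, since the xtrace only shows its arithmetic side effects (the hugepages.sh@81-84 records below). The sketch is reconstructed from those echoes, not copied from the source: nodes are filled from the highest index down, each taking an integer share of the pages still unassigned, so an odd total leaves the extra page on node0.

    # Reconstruction of the per-node hugepage split, inferred from the
    # hugepages.sh@81-84 xtrace; treat the function name as hypothetical.
    split_hugepages_sketch() {
        local _nr_hugepages=$1 _no_nodes=$2
        local -a nodes_test=()
        while ((_no_nodes > 0)); do
            # Highest-indexed node first: integer share of what is left.
            nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
            : $((_nr_hugepages -= nodes_test[_no_nodes - 1]))   # ": 513" then ": 0" in the trace
            : $((_no_nodes -= 1))                               # ": 1" then ": 0" in the trace
        done
        echo "${nodes_test[@]}"
    }

    split_hugepages_sketch 1024 2   # -> 512 512 (even_2G_alloc, above)
    split_hugepages_sketch 1025 2   # -> 513 512 (odd_alloc, below)

Run against the two cases in this log, the sketch reproduces the traced values exactly, including the intermediate ": 513", ": 1", ": 0" arithmetic echoes.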
00:03:23.920 19:32:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:23.920 19:32:41 setup.sh.hugepages -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:03:23.920 19:32:41 setup.sh.hugepages -- common/autotest_common.sh@1108 -- # xtrace_disable
00:03:23.920 19:32:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:23.920 ************************************
00:03:23.920 START TEST odd_alloc
00:03:23.920 ************************************
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # odd_alloc
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.920 19:32:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:25.298 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:25.298 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:25.298 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:25.298 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:25.298 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:25.298 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:25.298 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:25.298 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:25.298 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:25.298 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:25.298 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:25.298 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:25.298 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:25.298 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:25.298 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:25.298 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:25.298 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.298 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45912564 kB' 'MemAvailable: 49391704 kB' 'Buffers: 2704 kB' 'Cached: 10208368 kB' 'SwapCached: 0 kB' 'Active: 7203184 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814280 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489060 kB' 'Mapped: 184136 kB' 'Shmem: 6328316 kB' 'KReclaimable: 180444 kB' 'Slab: 552456 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372012 kB' 'KernelStack: 13008 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7928024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196820 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: every /proc/meminfo key from MemTotal through HardwareCorrupted is scanned and skipped until AnonHugePages matches]
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
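Two checks went by quickly in that trace: the hugepages.sh@96 test against *\[\n\e\v\e\r\]* and the AnonHugePages lookup that set anon=0. The string being tested, "always [madvise] never", is the content of /sys/kernel/mm/transparent_hugepage/enabled, where the bracketed word marks the active THP mode; the guard only skips the anonymous-hugepage accounting when that mode is [never]. A minimal reproduction, assuming a Linux host (the awk lookup is a stand-in for the get_meminfo helper sketched earlier, and the function name is hypothetical):

    # The @96 test in plain form: read the THP mode file and succeed unless
    # transparent hugepages are fully disabled ("[never]" selected).
    thp_not_disabled() {
        local modes
        modes=$(</sys/kernel/mm/transparent_hugepage/enabled)
        [[ $modes != *"[never]"* ]]
    }

    anon=0
    if thp_not_disabled; then
        # AnonHugePages from /proc/meminfo (0 kB in the run above).
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    fi
    echo "anon=$anon"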
'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.299 19:32:42 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
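The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "Field: value" pair at a time and comparing each field name against the requested key, echoing the value on the first match. A minimal sketch of that lookup pattern, assuming plain bash with extglob; get_meminfo's name, the meminfo paths, and the "Node <N>" prefix strip come from the trace, while the argument handling and loop shape are illustrative rather than the actual setup/common.sh source:

#!/usr/bin/env bash
shopt -s extglob

# Return the value of one /proc/meminfo field (or of a per-node sysfs copy).
get_meminfo() {
    local get=$1 node=$2
    local var val _ mem line
    local mem_f=/proc/meminfo
    # Per-node queries read the sysfs meminfo for that node instead.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # sysfs prefixes every field with "Node <N> "; strip it so the field
    # names match the /proc/meminfo spelling.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Echo the numeric value as soon as the field name matches.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

# e.g. `get_meminfo HugePages_Surp` prints 0 on the node traced here.
get_meminfo "$@"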
00:03:25.300 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45912796 kB' 'MemAvailable: 49391936 kB' 'Buffers: 2704 kB' 'Cached: 10208372 kB' 'SwapCached: 0 kB' 'Active: 7202484 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813580 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488360 kB' 'Mapped: 184068 kB' 'Shmem: 6328320 kB' 'KReclaimable: 180444 kB' 'Slab: 552436 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 371992 kB' 'KernelStack: 12912 kB' 'PageTables: 7980 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7925684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.302 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45913204 kB' 'MemAvailable: 49392344 kB' 'Buffers: 2704 kB' 'Cached: 10208388 kB' 'SwapCached: 0 kB' 'Active: 7202020 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813116 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487940 kB' 'Mapped: 184068 kB' 'Shmem: 6328336 kB' 'KReclaimable: 180444 kB' 'Slab: 552436 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 371992 kB' 'KernelStack: 12720 kB' 'PageTables: 7408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7925704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:25.304 nr_hugepages=1025
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:25.304 resv_hugepages=0
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:25.304 surplus_hugepages=0
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:25.304 anon_hugepages=0
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
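With anon, surp, and resv all read back as 0, hugepages.sh checks that the kernel honoured the odd 1025-page request before moving on to the per-node totals. A condensed sketch of that bookkeeping, reusing the get_meminfo sketch above; the 1025 target and the variable names mirror the trace, the consistency checks paraphrase setup/hugepages.sh@107-109, and verify_odd_alloc itself is a hypothetical wrapper, not the harness's own function:

# Verify an odd hugepage allocation (e.g. nr_hugepages=1025) is stable:
# the configured count must account for every surplus/reserved page.
verify_odd_alloc() {
    local expected=$1
    local anon surp resv total
    anon=$(get_meminfo AnonHugePages)    # THP usage in kB, 0 in this run
    surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the pool
    resv=$(get_meminfo HugePages_Rsvd)   # pages promised but not yet faulted
    total=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Consistent only if nothing is hiding in surplus/reserved and the
    # reported total equals the requested odd count.
    (( expected == total + surp + resv )) && (( expected == total ))
}

verify_odd_alloc 1025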
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:25.304 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45913548 kB' 'MemAvailable: 49392688 kB' 'Buffers: 2704 kB' 'Cached: 10208412 kB' 'SwapCached: 0 kB' 'Active: 7201648 kB' 'Inactive: 3493852 kB' 'Active(anon): 6812744 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487596 kB' 'Mapped: 184068 kB' 'Shmem: 6328360 kB' 'KReclaimable: 180444 kB' 'Slab: 552564 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372120 kB' 'KernelStack: 12688 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7925724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.305 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- 
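For readers following the trace: the xtrace noise above is get_meminfo in setup/common.sh doing a plain field-by-field scan of a meminfo file, with read and the [[ ... ]] test firing once per field and only the matching field producing output. A minimal, self-contained sketch of that pattern follows; it is an illustration written for this log, not the setup/common.sh source, and the fallback logic is simplified.

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (illustrative, not the SPDK source).
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo when sysfs exposes it.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    get_meminfo HugePages_Total    # prints 1025 on the box traced above
    get_meminfo HugePages_Surp 0   # per-node surplus, as queried next in the log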
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.306 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21649200 kB' 'MemUsed: 11227740 kB' 'SwapCached: 0 kB' 'Active: 5667336 kB' 'Inactive: 3248472 kB' 'Active(anon): 5456600 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614580 kB' 'Mapped: 147984 kB' 'AnonPages: 304368 kB' 'Shmem: 5155372 kB' 'KernelStack: 7864 kB' 'PageTables: 4864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359768 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 243952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace loop elided: all 36 node0 fields ahead of HugePages_Surp fail the match and hit 'continue']
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:25.307 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24264660 kB' 'MemUsed: 3400128 kB' 'SwapCached: 0 kB' 'Active: 1534352 kB' 'Inactive: 245380 kB' 'Active(anon): 1356184 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1596560 kB' 'Mapped: 36084 kB' 'AnonPages: 183228 kB' 'Shmem: 1173012 kB' 'KernelStack: 4824 kB' 'PageTables: 2808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64628 kB' 'Slab: 192796 kB' 'SReclaimable: 64628 kB' 'SUnreclaim: 128168 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace loop elided: all 36 node1 fields ahead of HugePages_Surp fail the match and hit 'continue']
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
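A note on the sorted_t/sorted_s bookkeeping just traced: hugepages.sh appears to use the hugepage count itself as the array index, so listing the keys back out yields the distinct counts in ascending order, which makes the comparison that follows insensitive to which node ended up with which count. A small sketch of that trick, using this run's numbers (the variable names mirror the trace; the snippet is a reconstruction, not the hugepages.sh source):

    #!/usr/bin/env bash
    # Order-insensitive count matching (this run: 1025 pages over 2 nodes).
    nodes_test=(512 513)   # what each node actually holds
    nodes_sys=(513 512)    # what the test asked each node to hold
    declare -a sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # index by the count -> keys come back sorted
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # "512 513" == "512 513" -> passes even though the two nodes swapped counts
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "hugepage layout OK"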
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:25.309 node0=512 expecting 513
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:25.309 node1=513 expecting 512
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:25.309
00:03:25.309 real 0m1.416s
00:03:25.309 user 0m0.559s
00:03:25.309 sys 0m0.821s
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1127 -- # xtrace_disable
00:03:25.309 19:32:42 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:25.309 ************************************
00:03:25.309 END TEST odd_alloc
00:03:25.309 ************************************
00:03:25.309 19:32:42 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:25.309 19:32:42 setup.sh.hugepages -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:03:25.309 19:32:42 setup.sh.hugepages -- common/autotest_common.sh@1108 -- # xtrace_disable
00:03:25.309 19:32:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:25.309 ************************************
00:03:25.309 START TEST custom_alloc
00:03:25.309 ************************************
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # custom_alloc
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
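The get_test_nr_hugepages call above turns a size in kB into a page count (1048576 kB / 2048 kB hugepages = 512) and then deals those pages across the NUMA nodes; the ': 256' / ': 1' no-op lines traced below suggest the remainder is tracked the same way. A sketch that reproduces the arithmetic the trace records; the variable names mirror the trace, but the loop body is a reconstruction, not the hugepages.sh source:

    #!/usr/bin/env bash
    # Size -> page count -> per-node split, as traced in this run.
    # Assumptions: 2048 kB hugepages, 2 NUMA nodes, no explicit per-node request.
    default_hugepages=2048          # kB, from Hugepagesize in /proc/meminfo
    size=1048576                    # kB, the test's request
    (( nr_hugepages = size / default_hugepages ))   # 512
    _nr_hugepages=$nr_hugepages _no_nodes=2
    declare -a nodes_test=()
    while (( _no_nodes > 0 )); do
        # Split what is left evenly over the remaining nodes.
        (( nodes_test[_no_nodes - 1] = _nr_hugepages / _no_nodes ))
        (( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"   # 256 256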
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:25.309 19:32:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:26.686 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:26.686 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:26.686 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:26.686 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:26.686 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:26.686 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:26.686 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:26.686 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:26.686 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:26.686 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:26.686 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:26.686 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:26.686 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:26.686 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:26.686 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:26.686 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:26.686 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
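The HUGENODE value handed to scripts/setup.sh above is just the nodes_hp array joined on commas; the trace's 'local IFS=,' at hugepages.sh@167 is what makes the "${arr[*]}" expansion produce that string. A sketch of the assembly step, with this run's values (illustrative only):

    #!/usr/bin/env bash
    # Build the HUGENODE spec that scripts/setup.sh consumes.
    declare -a nodes_hp=([0]=512 [1]=1024)   # pages per node, as traced above
    declare -a HUGENODE=()
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    # "${arr[*]}" joins elements on the first character of IFS, here a comma.
    (IFS=,; echo "HUGENODE=${HUGENODE[*]}")
    # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024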
setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.686 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.687 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44856968 kB' 'MemAvailable: 48336108 kB' 'Buffers: 2704 kB' 'Cached: 10208504 kB' 'SwapCached: 0 kB' 'Active: 7201744 kB' 'Inactive: 3493852 kB' 'Active(anon): 6812840 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488012 kB' 'Mapped: 184136 kB' 'Shmem: 6328452 kB' 'KReclaimable: 180444 kB' 'Slab: 552492 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372048 kB' 'KernelStack: 12672 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7925928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:26.687 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.687 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [trace elided: one "[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]" test and "continue" per remaining /proc/meminfo field] 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
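The repeated test/continue pairs above are one get_meminfo call scanning a snapshot of /proc/meminfo field by field for a single key; the bash trace prints one pattern test per field, which is what fills this log. A self-contained sketch of that lookup, reconstructed from the setup/common.sh trace lines (the real helper may differ in details such as error handling):

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <n> " prefix strip below

# Look up one field from /proc/meminfo (or a per-node meminfo when $2 is given).
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Prefer the per-node view when a node is requested and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <n> " prefix

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "Key: value kB" -> var, val
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Total   # prints 1536 on the box traced above
get_meminfo AnonHugePages     # prints 0, matching anon=0 in the trace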
setup/common.sh@31 -- # read -r var val _ 00:03:26.688 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44857976 kB' 'MemAvailable: 48337116 kB' 'Buffers: 2704 kB' 'Cached: 10208508 kB' 'SwapCached: 0 kB' 'Active: 7202108 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813204 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488044 kB' 'Mapped: 184160 kB' 'Shmem: 6328456 kB' 'KReclaimable: 180444 kB' 'Slab: 552564 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372120 kB' 'KernelStack: 12704 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7925948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' [trace elided: the same per-field test/continue scan over this snapshot until HugePages_Surp is reached] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44858660 kB' 'MemAvailable: 48337800 kB' 'Buffers: 2704 kB' 'Cached: 10208508 kB' 'SwapCached: 0 kB' 'Active: 7201500 kB' 'Inactive: 3493852 kB' 'Active(anon): 6812596 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487376 kB' 'Mapped: 184080 kB' 'Shmem: 6328456 kB' 'KReclaimable: 180444 kB' 'Slab: 552540 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372096 kB' 'KernelStack: 12672 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7925968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed:
196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.690 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.690 19:32:43 
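With anon and surp now known (both 0 here), and HugePages_Rsvd being looked up in the trace that follows, verify_nr_hugepages has everything it needs to confirm the kernel granted the request: the pool in the snapshot (HugePages_Total: 1536) net of surplus pages should equal the 1536 pages requested across the two nodes. A small sketch of that final check; the variable names follow the hugepages.sh trace, the exact comparison is an assumption, and get_meminfo is the helper sketched earlier:

#!/usr/bin/env bash
# Final arithmetic of the verification pass traced here.
# Assumes get_meminfo from the earlier sketch is defined or sourced.
nr_hugepages=1536                     # 512 on node0 + 1024 on node1
anon=$(get_meminfo AnonHugePages)     # 0 in the snapshot above
surp=$(get_meminfo HugePages_Surp)    # 0 in the snapshot above
resv=$(get_meminfo HugePages_Rsvd)    # looked up next in the trace
total=$(get_meminfo HugePages_Total)  # snapshot reports 1536

# The allocated pool, net of surplus pages, must match the request.
if (( total - surp == nr_hugepages )); then
    echo "hugepages OK: total=$total surp=$surp resv=$resv anon=$anon"
else
    echo "hugepages mismatch: $((total - surp)) != $nr_hugepages" >&2
    exit 1
fi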
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' [trace elided: the same per-field test/continue scan for the HugePages_Rsvd lookup] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@100 -- # resv=0 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:26.692 nr_hugepages=1536 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.692 resv_hugepages=0 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.692 surplus_hugepages=0 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.692 anon_hugepages=0 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44857904 kB' 'MemAvailable: 48337044 kB' 'Buffers: 2704 kB' 'Cached: 10208544 kB' 'SwapCached: 0 kB' 'Active: 7201888 kB' 'Inactive: 3493852 kB' 'Active(anon): 6812984 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487688 kB' 'Mapped: 184080 kB' 'Shmem: 6328492 kB' 'KReclaimable: 180444 kB' 'Slab: 552540 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372096 kB' 'KernelStack: 12688 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7925988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB' 00:03:26.692 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
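[editor's note] The xtrace above is the unrolled form of a small /proc/meminfo parser: get_meminfo splits each "Key: value [kB]" line on IFS=': ' and echoes the value of the requested key. The sketch below is a simplified reconstruction of that traced logic, assuming only what the trace itself shows; it is not SPDK's setup/common.sh verbatim, and this get_meminfo is a stand-in with the same name.

#!/usr/bin/env bash
# Simplified stand-in for the traced get_meminfo (system-wide case).
# The real script mapfiles the file and walks an array entry by entry;
# a while-read loop over /proc/meminfo has the same effect here.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 1536 for HugePages_Total on this build host
            return 0
        fi
    done </proc/meminfo
    return 1
}

get_meminfo HugePages_Total   # the call traced at hugepages.sh@110
get_meminfo HugePages_Rsvd    # matched above only after scanning every earlier key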
[... xtrace elided: the same continue/IFS/read scan repeats for each /proc/meminfo field after MemTotal, this time matched against HugePages_Total, until the key is found ...]
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
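[editor's note] The get_nodes walk just traced records a per-node hugepage count (512 pages on node0, 1024 on node1), which sums to the global nr_hugepages=1536 that get_meminfo reported. A minimal sketch of that bookkeeping follows, under the assumption that the per-node values come from each node's sysfs meminfo; names other than nodes_sys are illustrative, not SPDK's:

#!/usr/bin/env bash
# Enumerate NUMA nodes the way the trace does (its node+([0-9]) extglob
# is equivalent to this node[0-9]* glob), read each node's
# HugePages_Total from sysfs, and confirm the counts sum to the global
# target that get_meminfo reported above.
declare -A nodes_sys
for node in /sys/devices/system/node/node[0-9]*; do
    nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
done
total=0
for n in "${!nodes_sys[@]}"; do
    (( total += nodes_sys[n] ))
done
# On the host in this log: 512 + 1024 == 1536, the check at hugepages.sh@110.
(( total == 1536 )) && echo "per-node counts sum to $total"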
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.694 19:32:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.694 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21631436 kB' 'MemUsed: 11245504 kB' 'SwapCached: 0 kB' 'Active: 5667452 kB' 'Inactive: 3248472 kB' 'Active(anon): 5456716 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614700 kB' 'Mapped: 147996 kB' 'AnonPages: 304400 kB' 'Shmem: 5155492 kB' 'KernelStack: 7864 kB' 'PageTables: 4864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359828 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 244012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace elided: the same field-by-field scan runs over the node0 entries (MemTotal through HugePages_Free), matched against HugePages_Surp ...]
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
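[editor's note] These get_meminfo calls run node-scoped: given a node argument, the traced code swaps mem_f to /sys/devices/system/node/nodeN/meminfo and strips the leading "Node N " prefix those files carry (the mem=("${mem[@]#Node +([0-9]) }") step at common.sh@29) before splitting on ': '. A condensed reconstruction of that per-node path, again a sketch rather than the exact function:

#!/usr/bin/env bash
# Per-node lines read "Node 0 HugePages_Surp: 0", so the prefix has to
# go before the ': ' split; with extglob enabled, +([0-9]) matches the
# node id in the parameter-expansion pattern, exactly as in the trace.
get_node_meminfo() {
    local get=$1 node=$2 line var val _
    local -a mem
    shopt -s extglob
    mapfile -t mem <"/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo HugePages_Surp 0   # -> 0, the value added into nodes_test above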
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.695 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.696 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23227960 kB' 'MemUsed: 4436828 kB' 'SwapCached: 0 kB' 'Active: 1534432 kB' 'Inactive: 245380 kB' 'Active(anon): 1356264 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1596572 kB' 'Mapped: 36084 kB' 'AnonPages: 183288 kB' 'Shmem: 1173024 kB' 'KernelStack: 4824 kB' 'PageTables: 2812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 64628 kB' 'Slab: 192712 kB' 'SReclaimable: 64628 kB' 'SUnreclaim: 128084 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace elided: the field-by-field scan of the node1 entries against HugePages_Surp is still in progress when this excerpt of the log ends ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.697 node0=512 expecting 512 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:26.697 node1=1024 expecting 1024 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:26.697 00:03:26.697 real 0m1.425s 00:03:26.697 user 0m0.613s 00:03:26.697 sys 0m0.762s 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:26.697 19:32:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.697 ************************************ 00:03:26.697 END TEST custom_alloc 00:03:26.697 ************************************ 00:03:26.955 19:32:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:26.955 19:32:44 setup.sh.hugepages -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:26.955 19:32:44 setup.sh.hugepages -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:26.955 19:32:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.955 ************************************ 00:03:26.955 START TEST no_shrink_alloc 00:03:26.955 ************************************ 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # no_shrink_alloc 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
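The trace above shows get_test_nr_hugepages turning a requested pool size into a page count (size=2097152 kB divided by the 2048 kB default hugepage size gives nr_hugepages=1024) and get_test_nr_hugepages_per_node pinning that count onto the user-supplied node list. A minimal bash sketch of that arithmetic, assuming a 2048 kB default page size; the helper name is mine, not the SPDK function:

    # Hypothetical condensation of the traced setup/hugepages.sh@49-73 logic.
    get_test_nr_hugepages_sketch() {
        local size=$1; shift                               # pool size in kB (2097152 here)
        local node_ids=("$@")                              # optional NUMA node list ('0' here)
        local default_hugepages=2048                       # kB, Hugepagesize in /proc/meminfo
        local nr_hugepages=$(( size / default_hugepages )) # 2097152 / 2048 = 1024
        local -A nodes_test=()
        local node
        for node in "${node_ids[@]}"; do
            nodes_test[$node]=$nr_hugepages                # whole pool lands on each listed node
        done
        for node in "${!nodes_test[@]}"; do
            echo "node${node}=${nodes_test[$node]}"
        done
    }

Running get_test_nr_hugepages_sketch 2097152 0 would print node0=1024, which matches the nodes_test[_no_nodes]=1024 assignment the trace records before the test moves on to setup output.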
00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.955 19:32:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:27.888 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:27.888 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:27.888 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:27.888 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:27.888 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:27.888 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:27.888 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:27.888 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:27.888 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:27.888 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:27.888 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:27.888 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:27.888 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:27.888 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:27.888 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:27.888 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:27.888 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:28.150 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:28.150 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:28.150 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
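Every one of the long field-by-field stretches in this log is the same setup/common.sh get_meminfo loop: slurp /proc/meminfo (or a node's own meminfo file), strip any "Node <n> " prefix, split each "Field: value kB" line on ': ', and echo the value of the single requested field. A condensed re-creation of that loop, not the verbatim SPDK source:

    shopt -s extglob                         # the +([0-9]) pattern below needs extglob

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node queries read the node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue # source of the long `continue` runs in this trace
            echo "$val"
            return 0
        done
        return 1
    }

On this box get_meminfo_sketch HugePages_Total would print 1024 and get_meminfo_sketch AnonHugePages would print 0, matching the echo 0 / anon=0 the trace records below.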
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.151 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45888332 kB' 'MemAvailable: 49367472 kB' 'Buffers: 2704 kB' 'Cached: 10208640 kB' 'SwapCached: 0 kB' 'Active: 7207820 kB' 'Inactive: 3493852 kB' 'Active(anon): 6818916 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493588 kB' 'Mapped: 185172 kB' 'Shmem: 6328588 kB' 'KReclaimable: 180444 kB' 'Slab: 552784 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372340 kB' 'KernelStack: 12720 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7932472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196440 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@32 tests every snapshot field from MemTotal through HardwareCorrupted against AnonHugePages and skips each with `continue`]
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
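The reason the script fetched AnonHugePages at all is the hugepages.sh@96 test earlier in the trace, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]: a transparent-hugepage gate, since the anonymous hugepage counter is only meaningful when THP is not globally disabled, and the bracketed word in the sysfs knob marks the active mode. A standalone sketch of that gate; the function name is mine, and the sysfs path is the standard kernel location for this knob:

    thp_not_disabled_sketch() {
        local state
        state=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
        [[ $state != *"[never]"* ]]   # succeeds unless the selected mode is "never"
    }

Here the knob reads "always [madvise] never", so the gate passed and the fetch went ahead, returning the 0 kB recorded as anon=0 above.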
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.152 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45888484 kB' 'MemAvailable: 49367624 kB' 'Buffers: 2704 kB' 'Cached: 10208640 kB' 'SwapCached: 0 kB' 'Active: 7203456 kB' 'Inactive: 3493852 kB' 'Active(anon): 6814552 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 489280 kB' 'Mapped: 185172 kB' 'Shmem: 6328588 kB' 'KReclaimable: 180444 kB' 'Slab: 552764 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372320 kB' 'KernelStack: 12720 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7927988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
[xtrace elided: setup/common.sh@32 tests every snapshot field from MemTotal through HugePages_Rsvd against HugePages_Surp and skips each with `continue`]
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45884660 kB' 'MemAvailable: 49363800 kB' 'Buffers: 2704 kB' 'Cached: 10208660 kB' 'SwapCached: 0 kB' 'Active: 7206384 kB' 'Inactive: 3493852 kB' 'Active(anon): 6817480 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492048 kB' 'Mapped: 184528 kB' 'Shmem: 6328608 kB' 'KReclaimable: 180444 kB' 'Slab: 552732 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372288 kB' 'KernelStack: 12640 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7931312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196420 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
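At this point the verify step has anon=0 and surp=0 in hand and is fetching HugePages_Rsvd the same way; the snapshot it just printed shows HugePages_Total: 1024 and HugePages_Free: 1024, i.e. the full pool requested by get_test_nr_hugepages is present and unused. One plausible way to assemble these reads into the check the trace is driving toward, reusing get_meminfo_sketch from above (the actual verify_nr_hugepages internals may differ):

    verify_nr_hugepages_sketch() {
        local expected=$1                                  # 1024 in this run
        local surp resv total free
        surp=$(get_meminfo_sketch HugePages_Surp)          # 0 here
        resv=$(get_meminfo_sketch HugePages_Rsvd)          # 0 here
        total=$(get_meminfo_sketch HugePages_Total)        # 1024 here
        free=$(get_meminfo_sketch HugePages_Free)          # 1024 here
        # the allocated pool must match what the test asked for
        (( total == expected )) || return 1
        echo "total=$total free=$free surp=$surp resv=$resv"
    }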
-- setup/common.sh@31 -- # IFS=': ' 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.154 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.155 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
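The lookup the xtrace above repeats for each field reduces to a small pattern: mapfile the meminfo source (the per-node sysfs copy when a node is given, /proc/meminfo otherwise), strip any "Node <N> " prefix, then scan line by line until the requested field matches. The following is a minimal standalone sketch reconstructed from the trace; the function name and argument handling are assumptions, not the canonical setup/common.sh source.

```bash
#!/usr/bin/env bash
# Sketch of the meminfo lookup traced above; reconstructed from the
# xtrace, not the canonical setup/common.sh implementation.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val line
    local mem_f=/proc/meminfo mem
    # with a node argument, prefer the per-node sysfs copy of meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # per-node meminfo prefixes every line with "Node <N> "; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # split "HugePages_Rsvd: 0" into var=HugePages_Rsvd, val=0
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Rsvd    # prints 0, matching the trace
```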
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:28.156 nr_hugepages=1024
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:28.156 resv_hugepages=0
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:28.156 surplus_hugepages=0
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:28.156 anon_hugepages=0
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:28.156 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:28.157 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45880376 kB' 'MemAvailable: 49359516 kB' 'Buffers: 2704 kB' 'Cached: 10208680 kB' 'SwapCached: 0 kB' 'Active: 7207888 kB' 'Inactive: 3493852 kB' 'Active(anon): 6818984 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493596 kB' 'Mapped: 184944 kB' 'Shmem: 6328628 kB' 'KReclaimable: 180444 kB' 'Slab: 552732 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372288 kB' 'KernelStack: 12688 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7932532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196424 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
[... xtrace trimmed: each field ahead of HugePages_Total is compared and skipped with continue ...]
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
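The get_nodes helper traced here simply enumerates /sys/devices/system/node/node<N> directories and records a hugepage count per node (1024 on node0, 0 on node1 on this host). Below is a hedged sketch of that enumeration; reading the count from the per-node sysfs counter for 2048 kB pages (the Hugepagesize shown in the snapshots) is an assumed shortcut, not necessarily how hugepages.sh derives its numbers.

```bash
#!/usr/bin/env bash
# Sketch of per-node enumeration as in get_nodes above. The sysfs counter
# path is an assumption (2 MiB pages); hugepages.sh may track this state
# differently.
shopt -s extglob nullglob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"                         # 2 on this host
echo "node0=${nodes_sys[0]:-0} node1=${nodes_sys[1]:-0}" # 1024 / 0
```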
19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20577884 kB' 'MemUsed: 12299056 kB' 'SwapCached: 0 kB' 'Active: 5666848 kB' 'Inactive: 3248472 kB' 'Active(anon): 5456112 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614752 kB' 'Mapped: 148272 kB' 'AnonPages: 303688 kB' 'Shmem: 5155544 kB' 'KernelStack: 7832 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359768 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 243952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.158 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:28.159 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:28.159 19:32:45 
setup.sh.hugepages.no_shrink_alloc -- [xtrace collapsed: IFS=': ' read/continue loop over the remaining node0 meminfo fields (Mlocked through HugePages_Free); none matches HugePages_Surp]
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:28.160 node0=1024 expecting 1024
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:28.160 19:32:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:29.094 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:29.094 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:29.094 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:29.094 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:29.094 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:29.094 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:29.356 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:29.356 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:29.356 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:29.356 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:29.356 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:29.356 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:29.356 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:29.356 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:29.356 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:29.356 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:29.356 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:29.356 INFO: Requested 512 hugepages but 1024 already allocated on node0
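The two lines above are the useful signal in this stretch of trace: hugepages.sh read node0's HugePages_Total (1024) out of the node meminfo, printed it against the expected count, and setup.sh then declined to shrink the existing allocation down to NRHUGE=512. A minimal sketch of that per-node comparison, assuming the standard sysfs hugepage layout; the paths and variable names here are illustrative, not SPDK's actual setup.sh:

#!/usr/bin/env bash
# Hypothetical sketch: compare the 2 MiB hugepages already allocated on each
# NUMA node against an expected count, as the node0=1024 check above does.
expected=${1:-1024}

for node_dir in /sys/devices/system/node/node[0-9]*; do
  node=${node_dir##*/node}
  # nr_hugepages is the number of 2048 kB pages currently allocated there.
  allocated=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
  echo "node${node}=${allocated} expecting ${expected}"
  if (( allocated >= expected )); then
    echo "INFO: Requested ${expected} hugepages but ${allocated} already allocated on node${node}"
  fi
done

On this box the guard fires for node0, which is exactly the INFO line above: 1024 pages are already there, so nothing is freed.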
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.356 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45905588 kB' 'MemAvailable: 49384728 kB' 'Buffers: 2704 kB' 'Cached: 10208744 kB' 'SwapCached: 0 kB' 'Active: 7202396 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813492 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487996 kB' 'Mapped: 184192 kB' 'Shmem: 6328692 kB' 'KReclaimable: 180444 kB' 'Slab: 552628 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372184 kB' 'KernelStack: 12672 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7926460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:29.356 19:32:46 [xtrace collapsed: IFS=': ' read/compare loop over every field above; each non-matching field hits the @32 continue until AnonHugePages is reached]
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
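The @96 test earlier in this block is worth decoding: xtrace prints commands after expansion, so "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" is the contents of /sys/kernel/mm/transparent_hugepage/enabled being glob-matched to see whether transparent hugepages are disabled. A sketch of that check, assuming the standard kernel sysfs path; this paraphrases the pattern, it is not a quote of hugepages.sh:

#!/usr/bin/env bash
# Hypothetical sketch of the THP state check at setup/hugepages.sh@96 above.
# The kernel brackets the active mode, e.g. "always [madvise] never".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

if [[ $thp != *'[never]'* ]]; then
  echo "transparent hugepages enabled: $thp"   # madvise is bracketed here
else
  echo "transparent hugepages disabled"
fi

On this node the expanded string is "always [madvise] never", so the test succeeds and the function goes on to sample AnonHugePages.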
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.358 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45905592 kB' 'MemAvailable: 49384732 kB' 'Buffers: 2704 kB' 'Cached: 10208748 kB' 'SwapCached: 0 kB' 'Active: 7202344 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813440 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 488100 kB' 'Mapped: 184172 kB' 'Shmem: 6328696 kB' 'KReclaimable: 180444 kB' 'Slab: 552696 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372252 kB' 'KernelStack: 12736 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7926476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:29.358 19:32:46 [xtrace collapsed: IFS=': ' read/continue loop over every field above; only the final HugePages_Surp line matches]
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
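All of the get_meminfo calls in verify_nr_hugepages follow the same pattern the xtrace spells out statement by statement: snapshot the meminfo file into an array, strip any "Node N " prefix (present when reading a per-node meminfo), then split each line on ': ' until the requested field is found. A self-contained sketch reconstructed from the trace; it is close to, but not a verbatim copy of, setup/common.sh:

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo pattern visible in the xtrace above: print the
# value of one field from /proc/meminfo, or from a NUMA node's own meminfo
# when a node number is given.
get_meminfo() {
  local get=$1 node=${2:-}
  local var val
  local mem_f mem

  mem_f=/proc/meminfo
  # With an empty $node this tests /sys/devices/system/node/node/meminfo,
  # which does not exist, so the global /proc/meminfo is used (as above).
  [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo

  mapfile -t mem < "$mem_f"
  # Per-node files prefix every line with "Node N "; strip that prefix.
  mem=("${mem[@]#Node +([0-9]) }")

  # "HugePages_Surp:     0" splits into var=HugePages_Surp, val=0.
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && echo "$val" && return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1
}

get_meminfo HugePages_Surp   # prints 0 on this box, per the trace

The escaped patterns in the trace ("\H\u\g\e\P\a\g\e\s\_\S\u\r\p") are just how xtrace renders the quoted right-hand side of that [[ $var == "$get" ]] comparison.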
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:29.360 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45905872 kB' 'MemAvailable: 49385012 kB' 'Buffers: 2704 kB' 'Cached: 10208768 kB' 'SwapCached: 0 kB' 'Active: 7202084 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813180 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487732 kB' 'Mapped: 184096 kB' 'Shmem: 6328716 kB' 'KReclaimable: 180444 kB' 'Slab: 552692 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372248 kB' 'KernelStack: 12720 kB' 'PageTables: 7628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7926500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:29.360 19:32:46 [xtrace collapsed: IFS=': ' read/continue loop over the fields above from MemTotal through NFS_Unstable, still scanning for HugePages_Rsvd]
00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.361 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.362 nr_hugepages=1024 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.362 resv_hugepages=0 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.362 surplus_hugepages=0 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.362 anon_hugepages=0 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45906288 kB' 'MemAvailable: 49385428 kB' 'Buffers: 2704 kB' 
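What the trace above is doing, stripped of the xtrace noise: SPDK's get_meminfo helper snapshots the meminfo file, strips any "Node N " prefix, then walks the key/value pairs with IFS=': ' read until the requested key (here HugePages_Rsvd) matches and its value is echoed. A minimal standalone sketch of that pattern follows (illustrative, not the verbatim setup/common.sh source):

  #!/usr/bin/env bash
  shopt -s extglob   # the +([0-9]) pattern below needs extended globbing

  # Sketch of the get_meminfo pattern seen in the trace: print the value
  # recorded for key $1, optionally from one NUMA node's meminfo copy.
  get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo mem line var val _

      # A per-node lookup reads the sysfs copy when it exists.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo

      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix

      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      return 1
  }

  get_meminfo HugePages_Rsvd   # -> 0 on the machine in this log

Scanning line by line instead of shelling out to grep keeps the helper dependency-free, which is also why the xtrace shows one read/compare pair per meminfo key.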
00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.362 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45906288 kB' 'MemAvailable: 49385428 kB' 'Buffers: 2704 kB' 'Cached: 10208788 kB' 'SwapCached: 0 kB' 'Active: 7202260 kB' 'Inactive: 3493852 kB' 'Active(anon): 6813356 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487940 kB' 'Mapped: 184096 kB' 'Shmem: 6328736 kB' 'KReclaimable: 180444 kB' 'Slab: 552692 kB' 'SReclaimable: 180444 kB' 'SUnreclaim: 372248 kB' 'KernelStack: 12736 kB' 'PageTables: 7676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7926520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196452 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1889884 kB' 'DirectMap2M: 16904192 kB' 'DirectMap1G: 50331648 kB'
00:03:29.362-00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [repeated "IFS=': '" / "read -r var val _" / "[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" trace, one iteration per non-matching /proc/meminfo key, elided]
00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
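Next the harness enumerates the NUMA nodes (the get_nodes trace that follows). The only subtlety is the extglob node+([0-9]) pattern; here is a sketch under the same assumptions (the nodes_sys array mirrors the script's, and the hugepage-size path is the common 2048kB one):

  #!/usr/bin/env bash
  shopt -s extglob   # node+([0-9]) is an extended glob

  # Sketch of the get_nodes pattern: record each node's current 2 MB
  # huge page count, keyed by the numeric node id.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}   # /sys/.../node1 -> 1
      nodes_sys[$id]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
  done

  echo "no_nodes=${#nodes_sys[@]}"       # 2 on this machine
  for id in "${!nodes_sys[@]}"; do
      echo "node$id=${nodes_sys[$id]}"   # here: node0=1024, node1=0
  done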
00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.625 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20603816 kB' 'MemUsed: 12273124 kB' 'SwapCached: 0 kB' 'Active: 5666632 kB' 'Inactive: 3248472 kB' 'Active(anon): 5455896 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8614752 kB' 'Mapped: 148012 kB' 'AnonPages: 303476 kB' 'Shmem: 5155544 kB' 'KernelStack: 7864 kB' 'PageTables: 4720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115816 kB' 'Slab: 359884 kB' 'SReclaimable: 115816 kB' 'SUnreclaim: 244068 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
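The loop that follows picks HugePages_Surp out of that node0 dump; put together, the no_shrink_alloc check amounts to the comparison sketched here (a simplified restatement, not the verbatim hugepages.sh logic; expected/got are hypothetical names, and get_meminfo is the sketch shown earlier):

  #!/usr/bin/env bash
  # Simplified restatement of the per-node check behind
  # "node0=1024 expecting 1024".
  node=0 expected=1024
  resv=$(get_meminfo HugePages_Rsvd)          # global reserved pages: 0 here
  surp=$(get_meminfo HugePages_Surp "$node")  # node-local surplus: 0 here
  got=$(get_meminfo HugePages_Total "$node")  # node-local total: 1024 here
  (( got += resv + surp ))                    # fold in reserved/surplus pages
  echo "node$node=$got expecting $expected"
  [[ $got -eq $expected ]] || exit 1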
00:03:29.625-00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 [repeated "IFS=': '" / "read -r var val _" / "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" trace, one iteration per non-matching node0 meminfo key, elided]
00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:29.626 node0=1024 expecting 1024 19:32:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:29.626 00:03:29.626 real 0m2.693s 00:03:29.626 user 0m1.100s 00:03:29.626 sys 0m1.500s 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:29.626 19:32:46 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:29.626 ************************************ 00:03:29.626 END TEST no_shrink_alloc 00:03:29.626 ************************************
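With the suite finished, the harness resets every huge page pool it touched; that is the clear_hp trace that follows. The operation is just writing 0 to each nr_hugepages file on each node (a hedged sketch of the same pattern; requires root):

  #!/usr/bin/env bash
  # Sketch of the clear_hp pattern: zero the reserved pool for every
  # huge page size on every NUMA node the tests used.
  for node in /sys/devices/system/node/node*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"   # frees the pages back to the kernel
      done
  done
  export CLEAR_HUGE=yes   # tells later setup.sh runs to re-clear before allocating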
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:29.626 19:32:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:29.626 00:03:29.626 real 0m11.278s 00:03:29.626 user 0m4.318s 00:03:29.626 sys 0m5.835s 00:03:29.626 19:32:46 setup.sh.hugepages -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:29.626 19:32:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.626 ************************************ 00:03:29.626 END TEST hugepages 00:03:29.626 ************************************ 00:03:29.627 19:32:46 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:29.627 19:32:46 setup.sh -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:29.627 19:32:46 setup.sh -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:29.627 19:32:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:29.627 ************************************ 00:03:29.627 START TEST driver 00:03:29.627 ************************************ 00:03:29.627 19:32:46 setup.sh.driver -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:29.627 * Looking for test storage... 
00:03:29.627 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:29.627 19:32:46 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:29.627 19:32:46 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.627 19:32:46 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.153 19:32:49 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:32.153 19:32:49 setup.sh.driver -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:32.153 19:32:49 setup.sh.driver -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:32.153 19:32:49 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:32.153 ************************************ 00:03:32.153 START TEST guess_driver 00:03:32.153 ************************************ 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # guess_driver 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:32.153 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:32.153 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:32.154 19:32:49 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:32.154 Looking for driver=vfio-pci 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.154 19:32:49 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:33.525 19:32:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.459 19:32:51 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.987 00:03:36.987 real 0m4.890s 00:03:36.987 user 0m1.086s 00:03:36.987 sys 0m1.893s 00:03:36.987 19:32:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:36.987 19:32:54 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.987 ************************************ 00:03:36.987 END TEST guess_driver 00:03:36.987 ************************************ 00:03:36.987 00:03:36.987 real 0m7.408s 00:03:36.987 user 0m1.668s 00:03:36.987 sys 0m2.835s 00:03:36.987 19:32:54 setup.sh.driver -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:36.987 
19:32:54 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:36.987 ************************************ 00:03:36.987 END TEST driver 00:03:36.987 ************************************ 00:03:36.987 19:32:54 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:36.987 19:32:54 setup.sh -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:36.987 19:32:54 setup.sh -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:36.987 19:32:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:36.987 ************************************ 00:03:36.987 START TEST devices 00:03:36.987 ************************************ 00:03:36.987 19:32:54 setup.sh.devices -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:36.987 * Looking for test storage... 00:03:36.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:36.987 19:32:54 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:36.987 19:32:54 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:36.987 19:32:54 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.987 19:32:54 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.387 19:32:55 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # zoned_devs=() 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1670 -- # local -gA zoned_devs 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1671 -- # local nvme bdf 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1673 -- # for nvme in /sys/block/nvme* 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1674 -- # is_block_zoned nvme0n1 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1663 -- # local device=nvme0n1 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:38.388 19:32:55 setup.sh.devices -- common/autotest_common.sh@1666 -- # [[ none != none ]] 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:38.388 19:32:55 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:38.388 19:32:55 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:38.388 19:32:55 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:38.646 No valid GPT data, 
bailing 00:03:38.646 19:32:55 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:38.646 19:32:55 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:38.646 19:32:55 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:38.646 19:32:55 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:38.646 19:32:55 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:38.646 19:32:55 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:38.646 19:32:55 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:38.646 19:32:55 setup.sh.devices -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:38.646 19:32:55 setup.sh.devices -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:38.646 19:32:55 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:38.646 ************************************ 00:03:38.646 START TEST nvme_mount 00:03:38.646 ************************************ 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # nvme_mount 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:38.646 19:32:55 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:38.646 19:32:55 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:39.581 Creating new GPT entries in memory. 00:03:39.581 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:39.582 other utilities. 00:03:39.582 19:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:39.582 19:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.582 19:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:39.582 19:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.582 19:32:56 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.515 Creating new GPT entries in memory. 00:03:40.515 The operation has completed successfully. 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1038709 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:40.515 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
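Condensed, the nvme_mount setup just traced (zap the disk, create one partition under flock, re-format it, mount it, and drop a marker file) comes down to the sketch below; $MNT stands in for the long .../spdk/test/setup/nvme_mount path, and the sync_dev_uevents.sh wrapper is elided:

# sketch of the nvme_mount setup traced above; $MNT is a stand-in path
disk=/dev/nvme0n1 part=/dev/nvme0n1p1
MNT=/tmp/nvme_mount
sgdisk "$disk" --zap-all                          # clear old GPT/MBR state
flock "$disk" sgdisk "$disk" --new=1:2048:2099199 # 1 GiB partition -> p1
mkfs.ext4 -qF "$part"                             # quiet, forced re-format
mkdir -p "$MNT" && mount "$part" "$MNT"
: > "$MNT/test_nvme"  # marker file the verify step then checks for
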
00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:40.773 19:32:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.707 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:41.708 19:32:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:41.966 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:41.966 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:42.224 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:42.225 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:42.225 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:42.225 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:42.225 19:32:59 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.225 19:32:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.598 19:33:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:44.532 19:33:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.790 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:44.790 00:03:44.790 real 0m6.236s 00:03:44.790 user 0m1.492s 00:03:44.790 sys 0m2.277s 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:44.790 19:33:02 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:44.790 ************************************ 00:03:44.790 END TEST nvme_mount 00:03:44.790 ************************************ 
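The teardown that closes the test is compact: unmount if still mounted, then wipe filesystem and partition-table signatures from both the partition and the whole disk so the next test starts blank. A sketch of that cleanup_nvme sequence ($MNT again stands in for the nvme_mount path):

# sketch of the cleanup_nvme teardown traced above
MNT=/tmp/nvme_mount
mountpoint -q "$MNT" && umount "$MNT"  # unmount only if still mounted
# these produce the "bytes were erased" wipefs records in the log
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
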
00:03:44.790 19:33:02 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:44.790 19:33:02 setup.sh.devices -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:03:44.790 19:33:02 setup.sh.devices -- common/autotest_common.sh@1108 -- # xtrace_disable 00:03:44.790 19:33:02 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.790 ************************************ 00:03:44.790 START TEST dm_mount 00:03:44.790 ************************************ 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # dm_mount 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:44.790 19:33:02 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:46.165 Creating new GPT entries in memory. 00:03:46.165 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:46.165 other utilities. 00:03:46.165 19:33:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:46.165 19:33:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:46.165 19:33:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:46.165 19:33:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:46.165 19:33:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:47.099 Creating new GPT entries in memory. 00:03:47.099 The operation has completed successfully. 
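The dm_mount records that follow carve a second 1 GiB partition and stack a device-mapper node on top of both. The dmsetup table itself is not echoed in the trace, so the linear concatenation below is an assumption; sizes are in 512-byte sectors:

# sketch of the dm_mount setup; the dmsetup table (linear concat of p1
# and p2) is assumed, since the trace only shows "dmsetup create"
disk=/dev/nvme0n1
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351  # p2; p1 was made above
s1=$(blockdev --getsz "${disk}p1")  # partition sizes in sectors
s2=$(blockdev --getsz "${disk}p2")
dmsetup create nvme_dm_test <<EOF
0 $s1 linear ${disk}p1 0
$s1 $s2 linear ${disk}p2 0
EOF
dm=$(readlink -f /dev/mapper/nvme_dm_test)  # resolves to /dev/dm-0 in the trace
[[ -e /sys/class/block/nvme0n1p1/holders/${dm##*/} ]]  # holder check from the log
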
00:03:47.099 19:33:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:47.099 19:33:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:47.099 19:33:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:47.099 19:33:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:47.099 19:33:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:48.033 The operation has completed successfully. 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1041206 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.033 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:48.034 19:33:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:48.034 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.034 19:33:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:48.967 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:49.225 19:33:06 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.225 19:33:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.159 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:50.417 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:50.417 00:03:50.417 real 0m5.663s 00:03:50.417 user 0m0.964s 00:03:50.417 sys 0m1.567s 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@1127 -- # xtrace_disable 00:03:50.417 19:33:07 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:50.417 ************************************ 00:03:50.417 END TEST dm_mount 00:03:50.417 ************************************ 00:03:50.417 19:33:07 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:50.417 19:33:07 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:50.417 19:33:07 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:50.417 19:33:07 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:50.417 19:33:07 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:50.417 19:33:07 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:50.417 19:33:07 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:50.674 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:50.674 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:50.674 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:50.674 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:50.674 19:33:08 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:50.931 19:33:08 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:50.931 19:33:08 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:50.931 19:33:08 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:50.931 19:33:08 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:50.931 19:33:08 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:50.931 19:33:08 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:50.931
00:03:50.931 real 0m13.752s
00:03:50.931 user 0m3.088s
00:03:50.931 sys 0m4.833s
00:03:50.931 19:33:08 setup.sh.devices -- common/autotest_common.sh@1127 -- # xtrace_disable
00:03:50.931 19:33:08 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:50.931 ************************************
00:03:50.931 END TEST devices
00:03:50.931 ************************************
00:03:50.931
00:03:50.931 real 0m42.981s
00:03:50.931 user 0m12.262s
00:03:50.931 sys 0m18.884s
00:03:50.931 19:33:08 setup.sh -- common/autotest_common.sh@1127 -- # xtrace_disable
00:03:50.931 19:33:08 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:50.931 ************************************
00:03:50.931 END TEST setup.sh
00:03:50.931 ************************************
00:03:50.931 19:33:08 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:51.864 Hugepages
00:03:51.864 node     hugesize     free /  total
00:03:51.864 node0   1048576kB        0 /      0
00:03:51.864 node0      2048kB     2048 /   2048
00:03:51.864 node1   1048576kB        0 /      0
00:03:51.864 node1      2048kB        0 /      0
00:03:51.864
00:03:51.864 Type   BDF             Vendor Device NUMA Driver    Device Block devices
00:03:51.864 I/OAT  0000:00:04.0    8086   0e20   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.1    8086   0e21   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.2    8086   0e22   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.3    8086   0e23   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.4    8086   0e24   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.5    8086   0e25   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.6    8086   0e26   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:00:04.7    8086   0e27   0    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.0    8086   0e20   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.1    8086   0e21   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.2    8086   0e22   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.3    8086   0e23   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.4    8086   0e24   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.5    8086   0e25   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.6    8086   0e26   1    ioatdma   -      -
00:03:51.864 I/OAT  0000:80:04.7    8086   0e27   1    ioatdma   -      -
00:03:52.122 NVMe   0000:88:00.0    8086   0a54   1    nvme      nvme0  nvme0n1
00:03:52.122 19:33:09 -- spdk/autotest.sh@130 -- # uname -s
00:03:52.122 19:33:09 --
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:52.122 19:33:09 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:52.122 19:33:09 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.056 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:53.056 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:53.056 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:53.056 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:53.056 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:53.315 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:53.315 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:53.315 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:53.315 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:54.249 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.249 19:33:11 -- common/autotest_common.sh@1533 -- # sleep 1 00:03:55.622 19:33:12 -- common/autotest_common.sh@1534 -- # bdfs=() 00:03:55.622 19:33:12 -- common/autotest_common.sh@1534 -- # local bdfs 00:03:55.622 19:33:12 -- common/autotest_common.sh@1535 -- # bdfs=($(get_nvme_bdfs)) 00:03:55.622 19:33:12 -- common/autotest_common.sh@1535 -- # get_nvme_bdfs 00:03:55.622 19:33:12 -- common/autotest_common.sh@1514 -- # bdfs=() 00:03:55.622 19:33:12 -- common/autotest_common.sh@1514 -- # local bdfs 00:03:55.622 19:33:12 -- common/autotest_common.sh@1515 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:55.622 19:33:12 -- common/autotest_common.sh@1515 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:55.622 19:33:12 -- common/autotest_common.sh@1515 -- # jq -r '.config[].params.traddr' 00:03:55.622 19:33:12 -- common/autotest_common.sh@1516 -- # (( 1 == 0 )) 00:03:55.622 19:33:12 -- common/autotest_common.sh@1520 -- # printf '%s\n' 0000:88:00.0 00:03:55.622 19:33:12 -- common/autotest_common.sh@1537 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.551 Waiting for block devices as requested 00:03:56.551 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:56.551 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:56.551 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:56.833 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:56.833 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:56.833 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:57.091 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:57.091 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:57.091 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:57.091 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:57.091 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:57.349 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:57.349 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:57.349 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:57.349 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:57.606 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:57.606 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:57.606 19:33:14 -- common/autotest_common.sh@1539 -- # for bdf in "${bdfs[@]}" 
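The get_nvme_bdfs helper traced above builds its device list by piping SPDK's gen_nvme.sh output through jq, exactly as the xtrace shows. A minimal standalone sketch of the same enumeration (assuming rootdir points at an SPDK checkout; everything else is taken from the trace):

  #!/usr/bin/env bash
  # Enumerate NVMe PCI addresses (BDFs): gen_nvme.sh emits a JSON bdev
  # config and jq pulls out each controller's transport address.
  rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers found' >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"    # on this node: 0000:88:00.0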
00:03:57.606 19:33:14 -- common/autotest_common.sh@1540 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:57.606 19:33:14 -- common/autotest_common.sh@1503 -- # readlink -f /sys/class/nvme/nvme0 00:03:57.606 19:33:14 -- common/autotest_common.sh@1503 -- # grep 0000:88:00.0/nvme/nvme 00:03:57.863 19:33:14 -- common/autotest_common.sh@1503 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:57.863 19:33:14 -- common/autotest_common.sh@1504 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:57.863 19:33:14 -- common/autotest_common.sh@1508 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:57.863 19:33:14 -- common/autotest_common.sh@1508 -- # printf '%s\n' nvme0 00:03:57.863 19:33:14 -- common/autotest_common.sh@1540 -- # nvme_ctrlr=/dev/nvme0 00:03:57.863 19:33:14 -- common/autotest_common.sh@1541 -- # [[ -z /dev/nvme0 ]] 00:03:57.863 19:33:14 -- common/autotest_common.sh@1546 -- # nvme id-ctrl /dev/nvme0 00:03:57.863 19:33:14 -- common/autotest_common.sh@1546 -- # grep oacs 00:03:57.863 19:33:14 -- common/autotest_common.sh@1546 -- # cut -d: -f2 00:03:57.863 19:33:14 -- common/autotest_common.sh@1546 -- # oacs=' 0xf' 00:03:57.863 19:33:14 -- common/autotest_common.sh@1547 -- # oacs_ns_manage=8 00:03:57.863 19:33:14 -- common/autotest_common.sh@1549 -- # [[ 8 -ne 0 ]] 00:03:57.863 19:33:14 -- common/autotest_common.sh@1555 -- # nvme id-ctrl /dev/nvme0 00:03:57.863 19:33:14 -- common/autotest_common.sh@1555 -- # grep unvmcap 00:03:57.863 19:33:14 -- common/autotest_common.sh@1555 -- # cut -d: -f2 00:03:57.863 19:33:15 -- common/autotest_common.sh@1555 -- # unvmcap=' 0' 00:03:57.863 19:33:15 -- common/autotest_common.sh@1556 -- # [[ 0 -eq 0 ]] 00:03:57.863 19:33:15 -- common/autotest_common.sh@1558 -- # continue 00:03:57.863 19:33:15 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:57.863 19:33:15 -- common/autotest_common.sh@731 -- # xtrace_disable 00:03:57.863 19:33:15 -- common/autotest_common.sh@10 -- # set +x 00:03:57.863 19:33:15 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:57.863 19:33:15 -- common/autotest_common.sh@725 -- # xtrace_disable 00:03:57.863 19:33:15 -- common/autotest_common.sh@10 -- # set +x 00:03:57.863 19:33:15 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:58.794 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:58.794 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:58.794 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.051 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.051 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.051 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.051 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.051 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:59.051 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:59.984 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:59.984 19:33:17 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:59.984 19:33:17 -- common/autotest_common.sh@731 -- # xtrace_disable 00:03:59.984 19:33:17 -- 
common/autotest_common.sh@10 -- # set +x 00:03:59.984 19:33:17 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:59.984 19:33:17 -- common/autotest_common.sh@1592 -- # mapfile -t bdfs 00:03:59.984 19:33:17 -- common/autotest_common.sh@1592 -- # get_nvme_bdfs_by_id 0x0a54 00:03:59.984 19:33:17 -- common/autotest_common.sh@1578 -- # bdfs=() 00:03:59.984 19:33:17 -- common/autotest_common.sh@1578 -- # local bdfs 00:03:59.984 19:33:17 -- common/autotest_common.sh@1580 -- # get_nvme_bdfs 00:03:59.984 19:33:17 -- common/autotest_common.sh@1514 -- # bdfs=() 00:03:59.984 19:33:17 -- common/autotest_common.sh@1514 -- # local bdfs 00:03:59.984 19:33:17 -- common/autotest_common.sh@1515 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.984 19:33:17 -- common/autotest_common.sh@1515 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:59.984 19:33:17 -- common/autotest_common.sh@1515 -- # jq -r '.config[].params.traddr' 00:04:00.242 19:33:17 -- common/autotest_common.sh@1516 -- # (( 1 == 0 )) 00:04:00.242 19:33:17 -- common/autotest_common.sh@1520 -- # printf '%s\n' 0000:88:00.0 00:04:00.242 19:33:17 -- common/autotest_common.sh@1580 -- # for bdf in $(get_nvme_bdfs) 00:04:00.242 19:33:17 -- common/autotest_common.sh@1581 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:04:00.242 19:33:17 -- common/autotest_common.sh@1581 -- # device=0x0a54 00:04:00.242 19:33:17 -- common/autotest_common.sh@1582 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:00.242 19:33:17 -- common/autotest_common.sh@1583 -- # bdfs+=($bdf) 00:04:00.242 19:33:17 -- common/autotest_common.sh@1587 -- # printf '%s\n' 0000:88:00.0 00:04:00.242 19:33:17 -- common/autotest_common.sh@1593 -- # [[ -z 0000:88:00.0 ]] 00:04:00.242 19:33:17 -- common/autotest_common.sh@1598 -- # spdk_tgt_pid=1046887 00:04:00.242 19:33:17 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.242 19:33:17 -- common/autotest_common.sh@1599 -- # waitforlisten 1046887 00:04:00.242 19:33:17 -- common/autotest_common.sh@832 -- # '[' -z 1046887 ']' 00:04:00.242 19:33:17 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.242 19:33:17 -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:00.242 19:33:17 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:00.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.242 19:33:17 -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:00.242 19:33:17 -- common/autotest_common.sh@10 -- # set +x 00:04:00.242 [2024-07-24 19:33:17.467123] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
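The nvme_namespace_revert path above gates on the controller's OACS (Optional Admin Command Support) field: the trace shows oacs=' 0xf' and oacs_ns_manage=8, meaning bit 3 (mask 0x8, the Namespace Management and Attachment bit in the NVMe spec) is set, so the revert logic proceeds. The grep/cut pipeline below is taken from the trace; the explicit masking step is an assumption about how the helper derives the value 8:

  # Check Namespace Management support on a controller, as the helper does.
  ctrlr=/dev/nvme0                                         # from the BDF walk above
  oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)  # trace: ' 0xf'
  oacs_ns_manage=$(( oacs & 0x8 ))                         # bit 3: NS management (assumed)
  if [[ $oacs_ns_manage -ne 0 ]]; then
      echo "$ctrlr supports namespace management"          # trace: 8 -ne 0
  fi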
00:04:00.242 [2024-07-24 19:33:17.467225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1046887 ]
00:04:00.242 EAL: No free 2048 kB hugepages reported on node 1
00:04:00.242 [2024-07-24 19:33:17.530140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:00.500 [2024-07-24 19:33:17.645460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:04:01.063 19:33:18 -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:04:01.063 19:33:18 -- common/autotest_common.sh@865 -- # return 0
00:04:01.063 19:33:18 -- common/autotest_common.sh@1601 -- # bdf_id=0
00:04:01.063 19:33:18 -- common/autotest_common.sh@1602 -- # for bdf in "${bdfs[@]}"
00:04:01.063 19:33:18 -- common/autotest_common.sh@1603 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
00:04:04.341 nvme0n1
00:04:04.341 19:33:21 -- common/autotest_common.sh@1605 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:04:04.598 [2024-07-24 19:33:21.724373] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:04:04.598 [2024-07-24 19:33:21.724419] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:04:04.598 request:
00:04:04.598 {
00:04:04.598   "nvme_ctrlr_name": "nvme0",
00:04:04.598   "password": "test",
00:04:04.598   "method": "bdev_nvme_opal_revert",
00:04:04.598   "req_id": 1
00:04:04.598 }
00:04:04.598 Got JSON-RPC error response
00:04:04.598 response:
00:04:04.598 {
00:04:04.598   "code": -32603,
00:04:04.598   "message": "Internal error"
00:04:04.598 }
00:04:04.598 19:33:21 -- common/autotest_common.sh@1605 -- # true
00:04:04.598 19:33:21 -- common/autotest_common.sh@1606 -- # (( ++bdf_id ))
00:04:04.598 19:33:21 -- common/autotest_common.sh@1609 -- # killprocess 1046887
00:04:04.598 19:33:21 -- common/autotest_common.sh@951 -- # '[' -z 1046887 ']'
00:04:04.598 19:33:21 -- common/autotest_common.sh@955 -- # kill -0 1046887
00:04:04.598 19:33:21 -- common/autotest_common.sh@956 -- # uname
00:04:04.598 19:33:21 -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:04:04.598 19:33:21 -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1046887
00:04:04.598 19:33:21 -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:04:04.598 19:33:21 -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:04:04.598 19:33:21 -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1046887'
00:04:04.598 killing process with pid 1046887
00:04:04.598 19:33:21 -- common/autotest_common.sh@970 -- # kill 1046887
00:04:04.598 19:33:21 -- common/autotest_common.sh@975 -- # wait 1046887
00:04:06.491 19:33:23 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']'
00:04:06.491 19:33:23 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']'
00:04:06.491 19:33:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:06.491 19:33:23 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]]
00:04:06.491 19:33:23 -- spdk/autotest.sh@162 -- # timing_enter lib
00:04:06.491 19:33:23 -- common/autotest_common.sh@725 -- # xtrace_disable
00:04:06.491 19:33:23 -- common/autotest_common.sh@10 -- # set +x
00:04:06.491 19:33:23 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]]
00:04:06.491 19:33:23 -- spdk/autotest.sh@168 -- # run_test env
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:06.491 19:33:23 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:06.491 19:33:23 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:06.491 19:33:23 -- common/autotest_common.sh@10 -- # set +x 00:04:06.491 ************************************ 00:04:06.491 START TEST env 00:04:06.491 ************************************ 00:04:06.491 19:33:23 env -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:06.491 * Looking for test storage... 00:04:06.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:06.491 19:33:23 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:06.491 19:33:23 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:06.491 19:33:23 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:06.491 19:33:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.491 ************************************ 00:04:06.491 START TEST env_memory 00:04:06.491 ************************************ 00:04:06.491 19:33:23 env.env_memory -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:06.491 00:04:06.491 00:04:06.491 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.491 http://cunit.sourceforge.net/ 00:04:06.491 00:04:06.491 00:04:06.491 Suite: memory 00:04:06.491 Test: alloc and free memory map ...[2024-07-24 19:33:23.699615] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:06.491 passed 00:04:06.491 Test: mem map translation ...[2024-07-24 19:33:23.719910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:06.491 [2024-07-24 19:33:23.719930] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:06.491 [2024-07-24 19:33:23.719986] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:06.491 [2024-07-24 19:33:23.719998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:06.491 passed 00:04:06.491 Test: mem map registration ...[2024-07-24 19:33:23.760339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:06.491 [2024-07-24 19:33:23.760357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:06.491 passed 00:04:06.491 Test: mem map adjacent registrations ...passed 00:04:06.491 00:04:06.491 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.491 suites 1 1 n/a 0 0 00:04:06.491 tests 4 4 4 0 0 00:04:06.491 asserts 152 152 152 0 n/a 00:04:06.491 00:04:06.491 Elapsed time = 0.140 seconds 00:04:06.491 00:04:06.491 real 0m0.149s 00:04:06.491 user 0m0.145s 00:04:06.491 sys 0m0.004s 00:04:06.491 19:33:23 
env.env_memory -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:06.491 19:33:23 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:06.491 ************************************ 00:04:06.491 END TEST env_memory 00:04:06.491 ************************************ 00:04:06.491 19:33:23 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:06.491 19:33:23 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:06.491 19:33:23 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:06.491 19:33:23 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.491 ************************************ 00:04:06.491 START TEST env_vtophys 00:04:06.491 ************************************ 00:04:06.491 19:33:23 env.env_vtophys -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:06.749 EAL: lib.eal log level changed from notice to debug 00:04:06.749 EAL: Detected lcore 0 as core 0 on socket 0 00:04:06.749 EAL: Detected lcore 1 as core 1 on socket 0 00:04:06.749 EAL: Detected lcore 2 as core 2 on socket 0 00:04:06.749 EAL: Detected lcore 3 as core 3 on socket 0 00:04:06.749 EAL: Detected lcore 4 as core 4 on socket 0 00:04:06.749 EAL: Detected lcore 5 as core 5 on socket 0 00:04:06.749 EAL: Detected lcore 6 as core 8 on socket 0 00:04:06.749 EAL: Detected lcore 7 as core 9 on socket 0 00:04:06.749 EAL: Detected lcore 8 as core 10 on socket 0 00:04:06.749 EAL: Detected lcore 9 as core 11 on socket 0 00:04:06.749 EAL: Detected lcore 10 as core 12 on socket 0 00:04:06.749 EAL: Detected lcore 11 as core 13 on socket 0 00:04:06.749 EAL: Detected lcore 12 as core 0 on socket 1 00:04:06.749 EAL: Detected lcore 13 as core 1 on socket 1 00:04:06.749 EAL: Detected lcore 14 as core 2 on socket 1 00:04:06.749 EAL: Detected lcore 15 as core 3 on socket 1 00:04:06.749 EAL: Detected lcore 16 as core 4 on socket 1 00:04:06.749 EAL: Detected lcore 17 as core 5 on socket 1 00:04:06.749 EAL: Detected lcore 18 as core 8 on socket 1 00:04:06.749 EAL: Detected lcore 19 as core 9 on socket 1 00:04:06.749 EAL: Detected lcore 20 as core 10 on socket 1 00:04:06.749 EAL: Detected lcore 21 as core 11 on socket 1 00:04:06.749 EAL: Detected lcore 22 as core 12 on socket 1 00:04:06.749 EAL: Detected lcore 23 as core 13 on socket 1 00:04:06.749 EAL: Detected lcore 24 as core 0 on socket 0 00:04:06.749 EAL: Detected lcore 25 as core 1 on socket 0 00:04:06.749 EAL: Detected lcore 26 as core 2 on socket 0 00:04:06.749 EAL: Detected lcore 27 as core 3 on socket 0 00:04:06.749 EAL: Detected lcore 28 as core 4 on socket 0 00:04:06.749 EAL: Detected lcore 29 as core 5 on socket 0 00:04:06.749 EAL: Detected lcore 30 as core 8 on socket 0 00:04:06.749 EAL: Detected lcore 31 as core 9 on socket 0 00:04:06.749 EAL: Detected lcore 32 as core 10 on socket 0 00:04:06.749 EAL: Detected lcore 33 as core 11 on socket 0 00:04:06.749 EAL: Detected lcore 34 as core 12 on socket 0 00:04:06.749 EAL: Detected lcore 35 as core 13 on socket 0 00:04:06.749 EAL: Detected lcore 36 as core 0 on socket 1 00:04:06.749 EAL: Detected lcore 37 as core 1 on socket 1 00:04:06.749 EAL: Detected lcore 38 as core 2 on socket 1 00:04:06.749 EAL: Detected lcore 39 as core 3 on socket 1 00:04:06.749 EAL: Detected lcore 40 as core 4 on socket 1 00:04:06.749 EAL: Detected lcore 41 as core 5 on socket 1 00:04:06.749 EAL: Detected lcore 42 as core 8 on socket 1 00:04:06.749 EAL: Detected lcore 43 as core 9 
on socket 1 00:04:06.749 EAL: Detected lcore 44 as core 10 on socket 1 00:04:06.749 EAL: Detected lcore 45 as core 11 on socket 1 00:04:06.749 EAL: Detected lcore 46 as core 12 on socket 1 00:04:06.749 EAL: Detected lcore 47 as core 13 on socket 1 00:04:06.749 EAL: Maximum logical cores by configuration: 128 00:04:06.749 EAL: Detected CPU lcores: 48 00:04:06.749 EAL: Detected NUMA nodes: 2 00:04:06.749 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:06.749 EAL: Detected shared linkage of DPDK 00:04:06.749 EAL: No shared files mode enabled, IPC will be disabled 00:04:06.749 EAL: Bus pci wants IOVA as 'DC' 00:04:06.749 EAL: Buses did not request a specific IOVA mode. 00:04:06.749 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:06.749 EAL: Selected IOVA mode 'VA' 00:04:06.749 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.749 EAL: Probing VFIO support... 00:04:06.749 EAL: IOMMU type 1 (Type 1) is supported 00:04:06.749 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:06.749 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:06.749 EAL: VFIO support initialized 00:04:06.749 EAL: Ask a virtual area of 0x2e000 bytes 00:04:06.749 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:06.749 EAL: Setting up physically contiguous memory... 00:04:06.749 EAL: Setting maximum number of open files to 524288 00:04:06.749 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:06.749 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:06.749 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:06.749 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.749 EAL: Ask a virtual 
area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:06.749 EAL: Ask a virtual area of 0x61000 bytes 00:04:06.749 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:06.749 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:06.749 EAL: Ask a virtual area of 0x400000000 bytes 00:04:06.749 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:06.749 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:06.749 EAL: Hugepages will be freed exactly as allocated. 00:04:06.749 EAL: No shared files mode enabled, IPC is disabled 00:04:06.749 EAL: No shared files mode enabled, IPC is disabled 00:04:06.749 EAL: TSC frequency is ~2700000 KHz 00:04:06.749 EAL: Main lcore 0 is ready (tid=7f297fee6a00;cpuset=[0]) 00:04:06.749 EAL: Trying to obtain current memory policy. 00:04:06.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.749 EAL: Restoring previous memory policy: 0 00:04:06.749 EAL: request: mp_malloc_sync 00:04:06.749 EAL: No shared files mode enabled, IPC is disabled 00:04:06.749 EAL: Heap on socket 0 was expanded by 2MB 00:04:06.749 EAL: No shared files mode enabled, IPC is disabled 00:04:06.749 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:06.749 EAL: Mem event callback 'spdk:(nil)' registered 00:04:06.749 00:04:06.749 00:04:06.749 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.749 http://cunit.sourceforge.net/ 00:04:06.749 00:04:06.749 00:04:06.749 Suite: components_suite 00:04:06.749 Test: vtophys_malloc_test ...passed 00:04:06.749 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:06.749 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.749 EAL: Restoring previous memory policy: 4 00:04:06.749 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.749 EAL: request: mp_malloc_sync 00:04:06.749 EAL: No shared files mode enabled, IPC is disabled 00:04:06.749 EAL: Heap on socket 0 was expanded by 4MB 00:04:06.749 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 4MB 00:04:06.750 EAL: Trying to obtain current memory policy. 
00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.750 EAL: Restoring previous memory policy: 4 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was expanded by 6MB 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 6MB 00:04:06.750 EAL: Trying to obtain current memory policy. 00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.750 EAL: Restoring previous memory policy: 4 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was expanded by 10MB 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 10MB 00:04:06.750 EAL: Trying to obtain current memory policy. 00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.750 EAL: Restoring previous memory policy: 4 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was expanded by 18MB 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 18MB 00:04:06.750 EAL: Trying to obtain current memory policy. 00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.750 EAL: Restoring previous memory policy: 4 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was expanded by 34MB 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 34MB 00:04:06.750 EAL: Trying to obtain current memory policy. 00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.750 EAL: Restoring previous memory policy: 4 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was expanded by 66MB 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 66MB 00:04:06.750 EAL: Trying to obtain current memory policy. 
00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:06.750 EAL: Restoring previous memory policy: 4 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was expanded by 130MB 00:04:06.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.750 EAL: request: mp_malloc_sync 00:04:06.750 EAL: No shared files mode enabled, IPC is disabled 00:04:06.750 EAL: Heap on socket 0 was shrunk by 130MB 00:04:06.750 EAL: Trying to obtain current memory policy. 00:04:06.750 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.007 EAL: Restoring previous memory policy: 4 00:04:07.007 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.007 EAL: request: mp_malloc_sync 00:04:07.007 EAL: No shared files mode enabled, IPC is disabled 00:04:07.007 EAL: Heap on socket 0 was expanded by 258MB 00:04:07.007 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.007 EAL: request: mp_malloc_sync 00:04:07.007 EAL: No shared files mode enabled, IPC is disabled 00:04:07.007 EAL: Heap on socket 0 was shrunk by 258MB 00:04:07.007 EAL: Trying to obtain current memory policy. 00:04:07.007 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.264 EAL: Restoring previous memory policy: 4 00:04:07.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.265 EAL: request: mp_malloc_sync 00:04:07.265 EAL: No shared files mode enabled, IPC is disabled 00:04:07.265 EAL: Heap on socket 0 was expanded by 514MB 00:04:07.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.265 EAL: request: mp_malloc_sync 00:04:07.265 EAL: No shared files mode enabled, IPC is disabled 00:04:07.265 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.265 EAL: Trying to obtain current memory policy. 
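The expand/shrink cycles above have stepped through 4, 6, 10, 18, 34, 66 and 130 MB, and continue below through 258, 514 and 1026 MB. The sizes follow 2^n + 2 MB, a pattern inferred from the logged values rather than quoted from the test source, so each request roughly doubles while avoiding exact power-of-two boundaries:

  # Reproduce the heap-growth size series seen in the vtophys trace.
  for n in $(seq 1 10); do
      printf '%dMB ' $(( (1 << n) + 2 ))
  done; echo
  # prints: 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB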
00:04:07.265 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:07.829 EAL: Restoring previous memory policy: 4 00:04:07.829 EAL: Calling mem event callback 'spdk:(nil)' 00:04:07.829 EAL: request: mp_malloc_sync 00:04:07.829 EAL: No shared files mode enabled, IPC is disabled 00:04:07.829 EAL: Heap on socket 0 was expanded by 1026MB 00:04:07.829 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.087 EAL: request: mp_malloc_sync 00:04:08.087 EAL: No shared files mode enabled, IPC is disabled 00:04:08.087 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:08.087 passed 00:04:08.087 00:04:08.087 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.087 suites 1 1 n/a 0 0 00:04:08.087 tests 2 2 2 0 0 00:04:08.087 asserts 497 497 497 0 n/a 00:04:08.087 00:04:08.087 Elapsed time = 1.364 seconds 00:04:08.087 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.087 EAL: request: mp_malloc_sync 00:04:08.087 EAL: No shared files mode enabled, IPC is disabled 00:04:08.087 EAL: Heap on socket 0 was shrunk by 2MB 00:04:08.087 EAL: No shared files mode enabled, IPC is disabled 00:04:08.087 EAL: No shared files mode enabled, IPC is disabled 00:04:08.087 EAL: No shared files mode enabled, IPC is disabled 00:04:08.087 00:04:08.087 real 0m1.487s 00:04:08.087 user 0m0.851s 00:04:08.087 sys 0m0.600s 00:04:08.087 19:33:25 env.env_vtophys -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:08.087 19:33:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:08.087 ************************************ 00:04:08.087 END TEST env_vtophys 00:04:08.087 ************************************ 00:04:08.087 19:33:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.087 19:33:25 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:08.087 19:33:25 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:08.087 19:33:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.087 ************************************ 00:04:08.087 START TEST env_pci 00:04:08.087 ************************************ 00:04:08.087 19:33:25 env.env_pci -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:08.087 00:04:08.087 00:04:08.087 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.087 http://cunit.sourceforge.net/ 00:04:08.087 00:04:08.087 00:04:08.087 Suite: pci 00:04:08.087 Test: pci_hook ...[2024-07-24 19:33:25.406463] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1047901 has claimed it 00:04:08.087 EAL: Cannot find device (10000:00:01.0) 00:04:08.087 EAL: Failed to attach device on primary process 00:04:08.087 passed 00:04:08.087 00:04:08.087 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.087 suites 1 1 n/a 0 0 00:04:08.087 tests 1 1 1 0 0 00:04:08.087 asserts 25 25 25 0 n/a 00:04:08.087 00:04:08.087 Elapsed time = 0.022 seconds 00:04:08.087 00:04:08.087 real 0m0.033s 00:04:08.087 user 0m0.008s 00:04:08.087 sys 0m0.025s 00:04:08.087 19:33:25 env.env_pci -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:08.087 19:33:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.087 ************************************ 00:04:08.087 END TEST env_pci 00:04:08.087 ************************************ 00:04:08.087 19:33:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.087 
19:33:25 env -- env/env.sh@15 -- # uname 00:04:08.087 19:33:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.087 19:33:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.087 19:33:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.087 19:33:25 env -- common/autotest_common.sh@1102 -- # '[' 5 -le 1 ']' 00:04:08.087 19:33:25 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:08.087 19:33:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.345 ************************************ 00:04:08.345 START TEST env_dpdk_post_init 00:04:08.345 ************************************ 00:04:08.345 19:33:25 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.345 EAL: Detected CPU lcores: 48 00:04:08.345 EAL: Detected NUMA nodes: 2 00:04:08.345 EAL: Detected shared linkage of DPDK 00:04:08.345 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.345 EAL: Selected IOVA mode 'VA' 00:04:08.345 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.345 EAL: VFIO support initialized 00:04:08.345 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.345 EAL: Using IOMMU type 1 (Type 1) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:08.345 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:04:08.603 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:08.603 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:08.603 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:08.603 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:08.603 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:09.168 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:12.444 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:12.444 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:12.702 Starting DPDK initialization... 00:04:12.702 Starting SPDK post initialization... 00:04:12.702 SPDK NVMe probe 00:04:12.702 Attaching to 0000:88:00.0 00:04:12.702 Attached to 0000:88:00.0 00:04:12.702 Cleaning up... 
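Every "ioatdma -> vfio-pci" line in this log is setup.sh rebinding a device so DPDK can claim it, and env_dpdk_post_init then probes whatever is bound to vfio-pci. A quick, setup.sh-independent way to see which driver currently owns a BDF is to follow the sysfs driver symlink:

  # Report the kernel driver currently bound to a PCI device.
  bdf=${1:-0000:88:00.0}                      # the NVMe device in this run
  drv=/sys/bus/pci/devices/$bdf/driver
  if [[ -e $drv ]]; then
      basename "$(readlink -f "$drv")"        # e.g. vfio-pci, nvme or ioatdma
  else
      echo "$bdf: no driver bound"
  fi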
00:04:12.702 00:04:12.702 real 0m4.404s 00:04:12.702 user 0m3.259s 00:04:12.702 sys 0m0.199s 00:04:12.702 19:33:29 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:12.702 19:33:29 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.702 ************************************ 00:04:12.702 END TEST env_dpdk_post_init 00:04:12.702 ************************************ 00:04:12.702 19:33:29 env -- env/env.sh@26 -- # uname 00:04:12.702 19:33:29 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:12.702 19:33:29 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.702 19:33:29 env -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:12.702 19:33:29 env -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:12.702 19:33:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.702 ************************************ 00:04:12.702 START TEST env_mem_callbacks 00:04:12.702 ************************************ 00:04:12.702 19:33:29 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:12.702 EAL: Detected CPU lcores: 48 00:04:12.702 EAL: Detected NUMA nodes: 2 00:04:12.702 EAL: Detected shared linkage of DPDK 00:04:12.702 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:12.702 EAL: Selected IOVA mode 'VA' 00:04:12.702 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.702 EAL: VFIO support initialized 00:04:12.702 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:12.702 00:04:12.702 00:04:12.702 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.702 http://cunit.sourceforge.net/ 00:04:12.702 00:04:12.702 00:04:12.702 Suite: memory 00:04:12.702 Test: test ... 
00:04:12.703 register 0x200000200000 2097152 00:04:12.703 malloc 3145728 00:04:12.703 register 0x200000400000 4194304 00:04:12.703 buf 0x200000500000 len 3145728 PASSED 00:04:12.703 malloc 64 00:04:12.703 buf 0x2000004fff40 len 64 PASSED 00:04:12.703 malloc 4194304 00:04:12.703 register 0x200000800000 6291456 00:04:12.703 buf 0x200000a00000 len 4194304 PASSED 00:04:12.703 free 0x200000500000 3145728 00:04:12.703 free 0x2000004fff40 64 00:04:12.703 unregister 0x200000400000 4194304 PASSED 00:04:12.703 free 0x200000a00000 4194304 00:04:12.703 unregister 0x200000800000 6291456 PASSED 00:04:12.703 malloc 8388608 00:04:12.703 register 0x200000400000 10485760 00:04:12.703 buf 0x200000600000 len 8388608 PASSED 00:04:12.703 free 0x200000600000 8388608 00:04:12.703 unregister 0x200000400000 10485760 PASSED 00:04:12.703 passed 00:04:12.703 00:04:12.703 Run Summary: Type Total Ran Passed Failed Inactive 00:04:12.703 suites 1 1 n/a 0 0 00:04:12.703 tests 1 1 1 0 0 00:04:12.703 asserts 15 15 15 0 n/a 00:04:12.703 00:04:12.703 Elapsed time = 0.005 seconds 00:04:12.703 00:04:12.703 real 0m0.047s 00:04:12.703 user 0m0.014s 00:04:12.703 sys 0m0.033s 00:04:12.703 19:33:29 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:12.703 19:33:29 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:12.703 ************************************ 00:04:12.703 END TEST env_mem_callbacks 00:04:12.703 ************************************ 00:04:12.703 00:04:12.703 real 0m6.398s 00:04:12.703 user 0m4.393s 00:04:12.703 sys 0m1.041s 00:04:12.703 19:33:29 env -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:12.703 19:33:29 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.703 ************************************ 00:04:12.703 END TEST env 00:04:12.703 ************************************ 00:04:12.703 19:33:30 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:12.703 19:33:30 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:12.703 19:33:30 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:12.703 19:33:30 -- common/autotest_common.sh@10 -- # set +x 00:04:12.703 ************************************ 00:04:12.703 START TEST rpc 00:04:12.703 ************************************ 00:04:12.703 19:33:30 rpc -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:12.703 * Looking for test storage... 00:04:12.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:12.960 19:33:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1048563 00:04:12.960 19:33:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:12.960 19:33:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.960 19:33:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1048563 00:04:12.961 19:33:30 rpc -- common/autotest_common.sh@832 -- # '[' -z 1048563 ']' 00:04:12.961 19:33:30 rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.961 19:33:30 rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:12.961 19:33:30 rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
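waitforlisten above blocks until the freshly started spdk_tgt answers on its RPC socket; from then on every rpc_cmd in the rpc.sh tests is scripts/rpc.py speaking JSON-RPC over /var/tmp/spdk.sock. Roughly the same handshake can be done by hand (rpc_get_methods serves here only as a cheap liveness probe; the exact probe waitforlisten uses may differ):

  # Wait for the spdk_tgt RPC socket, then confirm the target answers.
  sock=/var/tmp/spdk.sock
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  until [[ -S $sock ]]; do sleep 0.1; done    # socket appears once spdk_tgt is up
  "$rpc" -s "$sock" rpc_get_methods >/dev/null && echo 'target is listening'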
00:04:12.961 19:33:30 rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:12.961 19:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.961 [2024-07-24 19:33:30.134901] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:12.961 [2024-07-24 19:33:30.134994] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048563 ] 00:04:12.961 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.961 [2024-07-24 19:33:30.191693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.961 [2024-07-24 19:33:30.296750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:12.961 [2024-07-24 19:33:30.296820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1048563' to capture a snapshot of events at runtime. 00:04:12.961 [2024-07-24 19:33:30.296845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:12.961 [2024-07-24 19:33:30.296855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:12.961 [2024-07-24 19:33:30.296865] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1048563 for offline analysis/debug. 00:04:12.961 [2024-07-24 19:33:30.296890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.218 19:33:30 rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:13.218 19:33:30 rpc -- common/autotest_common.sh@865 -- # return 0 00:04:13.218 19:33:30 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.218 19:33:30 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:13.218 19:33:30 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:13.218 19:33:30 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:13.218 19:33:30 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:13.218 19:33:30 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:13.218 19:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.218 ************************************ 00:04:13.218 START TEST rpc_integrity 00:04:13.218 ************************************ 00:04:13.218 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # rpc_integrity 00:04:13.218 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.218 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.218 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.218 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.218 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.218 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.476 19:33:30 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.476 { 00:04:13.476 "name": "Malloc0", 00:04:13.476 "aliases": [ 00:04:13.476 "b56e50c1-8df8-474b-b74f-e2aea3e74fc8" 00:04:13.476 ], 00:04:13.476 "product_name": "Malloc disk", 00:04:13.476 "block_size": 512, 00:04:13.476 "num_blocks": 16384, 00:04:13.476 "uuid": "b56e50c1-8df8-474b-b74f-e2aea3e74fc8", 00:04:13.476 "assigned_rate_limits": { 00:04:13.476 "rw_ios_per_sec": 0, 00:04:13.476 "rw_mbytes_per_sec": 0, 00:04:13.476 "r_mbytes_per_sec": 0, 00:04:13.476 "w_mbytes_per_sec": 0 00:04:13.476 }, 00:04:13.476 "claimed": false, 00:04:13.476 "zoned": false, 00:04:13.476 "supported_io_types": { 00:04:13.476 "read": true, 00:04:13.476 "write": true, 00:04:13.476 "unmap": true, 00:04:13.476 "flush": true, 00:04:13.476 "reset": true, 00:04:13.476 "nvme_admin": false, 00:04:13.476 "nvme_io": false, 00:04:13.476 "nvme_io_md": false, 00:04:13.476 "write_zeroes": true, 00:04:13.476 "zcopy": true, 00:04:13.476 "get_zone_info": false, 00:04:13.476 "zone_management": false, 00:04:13.476 "zone_append": false, 00:04:13.476 "compare": false, 00:04:13.476 "compare_and_write": false, 00:04:13.476 "abort": true, 00:04:13.476 "seek_hole": false, 00:04:13.476 "seek_data": false, 00:04:13.476 "copy": true, 00:04:13.476 "nvme_iov_md": false 00:04:13.476 }, 00:04:13.476 "memory_domains": [ 00:04:13.476 { 00:04:13.476 "dma_device_id": "system", 00:04:13.476 "dma_device_type": 1 00:04:13.476 }, 00:04:13.476 { 00:04:13.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.476 "dma_device_type": 2 00:04:13.476 } 00:04:13.476 ], 00:04:13.476 "driver_specific": {} 00:04:13.476 } 00:04:13.476 ]' 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.476 [2024-07-24 19:33:30.698264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:13.476 [2024-07-24 19:33:30.698330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:13.476 [2024-07-24 19:33:30.698353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6abd30 00:04:13.476 [2024-07-24 19:33:30.698366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:13.476 [2024-07-24 19:33:30.699895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
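The rpc_integrity test above creates a malloc bdev, layers a passthru bdev on top of it, and checks the bdev count with jq at each step. A minimal manual sketch of the same sequence, assuming a spdk_tgt is already listening on the default /var/tmp/spdk.sock (the size/block arguments and bdev names below are taken from this log, not fixed defaults):

  # create an 8 MiB malloc bdev with 512-byte blocks; the RPC prints the new name (e.g. Malloc0)
  ./scripts/rpc.py bdev_malloc_create 8 512
  # claim it under a passthru bdev; bdev_get_bdevs should now report two bdevs
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length
  # tear down in reverse order, as the test does
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0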
00:04:13.476 [2024-07-24 19:33:30.699924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:13.476 Passthru0 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.476 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.476 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.476 { 00:04:13.476 "name": "Malloc0", 00:04:13.476 "aliases": [ 00:04:13.476 "b56e50c1-8df8-474b-b74f-e2aea3e74fc8" 00:04:13.476 ], 00:04:13.476 "product_name": "Malloc disk", 00:04:13.476 "block_size": 512, 00:04:13.476 "num_blocks": 16384, 00:04:13.476 "uuid": "b56e50c1-8df8-474b-b74f-e2aea3e74fc8", 00:04:13.476 "assigned_rate_limits": { 00:04:13.476 "rw_ios_per_sec": 0, 00:04:13.476 "rw_mbytes_per_sec": 0, 00:04:13.476 "r_mbytes_per_sec": 0, 00:04:13.476 "w_mbytes_per_sec": 0 00:04:13.476 }, 00:04:13.476 "claimed": true, 00:04:13.476 "claim_type": "exclusive_write", 00:04:13.476 "zoned": false, 00:04:13.476 "supported_io_types": { 00:04:13.476 "read": true, 00:04:13.476 "write": true, 00:04:13.476 "unmap": true, 00:04:13.476 "flush": true, 00:04:13.476 "reset": true, 00:04:13.476 "nvme_admin": false, 00:04:13.476 "nvme_io": false, 00:04:13.476 "nvme_io_md": false, 00:04:13.476 "write_zeroes": true, 00:04:13.476 "zcopy": true, 00:04:13.476 "get_zone_info": false, 00:04:13.476 "zone_management": false, 00:04:13.476 "zone_append": false, 00:04:13.476 "compare": false, 00:04:13.476 "compare_and_write": false, 00:04:13.476 "abort": true, 00:04:13.476 "seek_hole": false, 00:04:13.477 "seek_data": false, 00:04:13.477 "copy": true, 00:04:13.477 "nvme_iov_md": false 00:04:13.477 }, 00:04:13.477 "memory_domains": [ 00:04:13.477 { 00:04:13.477 "dma_device_id": "system", 00:04:13.477 "dma_device_type": 1 00:04:13.477 }, 00:04:13.477 { 00:04:13.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.477 "dma_device_type": 2 00:04:13.477 } 00:04:13.477 ], 00:04:13.477 "driver_specific": {} 00:04:13.477 }, 00:04:13.477 { 00:04:13.477 "name": "Passthru0", 00:04:13.477 "aliases": [ 00:04:13.477 "9053a2e7-3a09-5068-8bf9-9e07cfb5cb72" 00:04:13.477 ], 00:04:13.477 "product_name": "passthru", 00:04:13.477 "block_size": 512, 00:04:13.477 "num_blocks": 16384, 00:04:13.477 "uuid": "9053a2e7-3a09-5068-8bf9-9e07cfb5cb72", 00:04:13.477 "assigned_rate_limits": { 00:04:13.477 "rw_ios_per_sec": 0, 00:04:13.477 "rw_mbytes_per_sec": 0, 00:04:13.477 "r_mbytes_per_sec": 0, 00:04:13.477 "w_mbytes_per_sec": 0 00:04:13.477 }, 00:04:13.477 "claimed": false, 00:04:13.477 "zoned": false, 00:04:13.477 "supported_io_types": { 00:04:13.477 "read": true, 00:04:13.477 "write": true, 00:04:13.477 "unmap": true, 00:04:13.477 "flush": true, 00:04:13.477 "reset": true, 00:04:13.477 "nvme_admin": false, 00:04:13.477 "nvme_io": false, 00:04:13.477 "nvme_io_md": false, 00:04:13.477 "write_zeroes": true, 00:04:13.477 "zcopy": true, 00:04:13.477 "get_zone_info": false, 00:04:13.477 "zone_management": false, 00:04:13.477 "zone_append": false, 00:04:13.477 "compare": false, 00:04:13.477 "compare_and_write": false, 00:04:13.477 "abort": true, 00:04:13.477 "seek_hole": false, 00:04:13.477 "seek_data": false, 00:04:13.477 "copy": true, 00:04:13.477 "nvme_iov_md": false 00:04:13.477 
}, 00:04:13.477 "memory_domains": [ 00:04:13.477 { 00:04:13.477 "dma_device_id": "system", 00:04:13.477 "dma_device_type": 1 00:04:13.477 }, 00:04:13.477 { 00:04:13.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.477 "dma_device_type": 2 00:04:13.477 } 00:04:13.477 ], 00:04:13.477 "driver_specific": { 00:04:13.477 "passthru": { 00:04:13.477 "name": "Passthru0", 00:04:13.477 "base_bdev_name": "Malloc0" 00:04:13.477 } 00:04:13.477 } 00:04:13.477 } 00:04:13.477 ]' 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.477 19:33:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.477 00:04:13.477 real 0m0.236s 00:04:13.477 user 0m0.156s 00:04:13.477 sys 0m0.022s 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:13.477 19:33:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.477 ************************************ 00:04:13.477 END TEST rpc_integrity 00:04:13.477 ************************************ 00:04:13.477 19:33:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.477 19:33:30 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:13.477 19:33:30 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:13.477 19:33:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.734 ************************************ 00:04:13.734 START TEST rpc_plugins 00:04:13.734 ************************************ 00:04:13.734 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # rpc_plugins 00:04:13.734 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.735 19:33:30 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.735 { 00:04:13.735 "name": "Malloc1", 00:04:13.735 "aliases": [ 00:04:13.735 "c236e5c2-2e59-430a-a366-5ded616a88b3" 00:04:13.735 ], 00:04:13.735 "product_name": "Malloc disk", 00:04:13.735 "block_size": 4096, 00:04:13.735 "num_blocks": 256, 00:04:13.735 "uuid": "c236e5c2-2e59-430a-a366-5ded616a88b3", 00:04:13.735 "assigned_rate_limits": { 00:04:13.735 "rw_ios_per_sec": 0, 00:04:13.735 "rw_mbytes_per_sec": 0, 00:04:13.735 "r_mbytes_per_sec": 0, 00:04:13.735 "w_mbytes_per_sec": 0 00:04:13.735 }, 00:04:13.735 "claimed": false, 00:04:13.735 "zoned": false, 00:04:13.735 "supported_io_types": { 00:04:13.735 "read": true, 00:04:13.735 "write": true, 00:04:13.735 "unmap": true, 00:04:13.735 "flush": true, 00:04:13.735 "reset": true, 00:04:13.735 "nvme_admin": false, 00:04:13.735 "nvme_io": false, 00:04:13.735 "nvme_io_md": false, 00:04:13.735 "write_zeroes": true, 00:04:13.735 "zcopy": true, 00:04:13.735 "get_zone_info": false, 00:04:13.735 "zone_management": false, 00:04:13.735 "zone_append": false, 00:04:13.735 "compare": false, 00:04:13.735 "compare_and_write": false, 00:04:13.735 "abort": true, 00:04:13.735 "seek_hole": false, 00:04:13.735 "seek_data": false, 00:04:13.735 "copy": true, 00:04:13.735 "nvme_iov_md": false 00:04:13.735 }, 00:04:13.735 "memory_domains": [ 00:04:13.735 { 00:04:13.735 "dma_device_id": "system", 00:04:13.735 "dma_device_type": 1 00:04:13.735 }, 00:04:13.735 { 00:04:13.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.735 "dma_device_type": 2 00:04:13.735 } 00:04:13.735 ], 00:04:13.735 "driver_specific": {} 00:04:13.735 } 00:04:13.735 ]' 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.735 19:33:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.735 00:04:13.735 real 0m0.117s 00:04:13.735 user 0m0.078s 00:04:13.735 sys 0m0.008s 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:13.735 19:33:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.735 ************************************ 00:04:13.735 END TEST rpc_plugins 00:04:13.735 ************************************ 00:04:13.735 19:33:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.735 19:33:31 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:13.735 19:33:31 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:13.735 19:33:31 
rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.735 ************************************ 00:04:13.735 START TEST rpc_trace_cmd_test 00:04:13.735 ************************************ 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # rpc_trace_cmd_test 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:13.735 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1048563", 00:04:13.735 "tpoint_group_mask": "0x8", 00:04:13.735 "iscsi_conn": { 00:04:13.735 "mask": "0x2", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "scsi": { 00:04:13.735 "mask": "0x4", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "bdev": { 00:04:13.735 "mask": "0x8", 00:04:13.735 "tpoint_mask": "0xffffffffffffffff" 00:04:13.735 }, 00:04:13.735 "nvmf_rdma": { 00:04:13.735 "mask": "0x10", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "nvmf_tcp": { 00:04:13.735 "mask": "0x20", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "ftl": { 00:04:13.735 "mask": "0x40", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "blobfs": { 00:04:13.735 "mask": "0x80", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "dsa": { 00:04:13.735 "mask": "0x200", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "thread": { 00:04:13.735 "mask": "0x400", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "nvme_pcie": { 00:04:13.735 "mask": "0x800", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "iaa": { 00:04:13.735 "mask": "0x1000", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "nvme_tcp": { 00:04:13.735 "mask": "0x2000", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "bdev_nvme": { 00:04:13.735 "mask": "0x4000", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 }, 00:04:13.735 "sock": { 00:04:13.735 "mask": "0x8000", 00:04:13.735 "tpoint_mask": "0x0" 00:04:13.735 } 00:04:13.735 }' 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:13.735 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:13.993 00:04:13.993 real 0m0.200s 00:04:13.993 user 0m0.178s 00:04:13.993 sys 0m0.015s 00:04:13.993 19:33:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:13.993 19:33:31 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.993 ************************************ 00:04:13.993 END TEST rpc_trace_cmd_test 00:04:13.993 ************************************ 00:04:13.993 19:33:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:13.993 19:33:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:13.993 19:33:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:13.993 19:33:31 rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:13.993 19:33:31 rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:13.993 19:33:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.993 ************************************ 00:04:13.993 START TEST rpc_daemon_integrity 00:04:13.993 ************************************ 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # rpc_integrity 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:13.993 { 00:04:13.993 "name": "Malloc2", 00:04:13.993 "aliases": [ 00:04:13.993 "0362f7c8-9849-4828-822e-18f6779b1cd3" 00:04:13.993 ], 00:04:13.993 "product_name": "Malloc disk", 00:04:13.993 "block_size": 512, 00:04:13.993 "num_blocks": 16384, 00:04:13.993 "uuid": "0362f7c8-9849-4828-822e-18f6779b1cd3", 00:04:13.993 "assigned_rate_limits": { 00:04:13.993 "rw_ios_per_sec": 0, 00:04:13.993 "rw_mbytes_per_sec": 0, 00:04:13.993 "r_mbytes_per_sec": 0, 00:04:13.993 "w_mbytes_per_sec": 0 00:04:13.993 }, 00:04:13.993 "claimed": false, 00:04:13.993 "zoned": false, 00:04:13.993 "supported_io_types": { 00:04:13.993 "read": true, 00:04:13.993 "write": true, 00:04:13.993 "unmap": true, 00:04:13.993 "flush": true, 00:04:13.993 "reset": true, 00:04:13.993 "nvme_admin": false, 00:04:13.993 "nvme_io": false, 00:04:13.993 "nvme_io_md": false, 00:04:13.993 "write_zeroes": true, 00:04:13.993 "zcopy": true, 00:04:13.993 "get_zone_info": false, 00:04:13.993 "zone_management": false, 00:04:13.993 "zone_append": false, 00:04:13.993 "compare": false, 00:04:13.993 "compare_and_write": false, 
00:04:13.993 "abort": true, 00:04:13.993 "seek_hole": false, 00:04:13.993 "seek_data": false, 00:04:13.993 "copy": true, 00:04:13.993 "nvme_iov_md": false 00:04:13.993 }, 00:04:13.993 "memory_domains": [ 00:04:13.993 { 00:04:13.993 "dma_device_id": "system", 00:04:13.993 "dma_device_type": 1 00:04:13.993 }, 00:04:13.993 { 00:04:13.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.993 "dma_device_type": 2 00:04:13.993 } 00:04:13.993 ], 00:04:13.993 "driver_specific": {} 00:04:13.993 } 00:04:13.993 ]' 00:04:13.993 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.250 [2024-07-24 19:33:31.388532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.250 [2024-07-24 19:33:31.388585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.250 [2024-07-24 19:33:31.388614] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6a4dc0 00:04:14.250 [2024-07-24 19:33:31.388630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.250 [2024-07-24 19:33:31.390008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.250 [2024-07-24 19:33:31.390038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.250 Passthru0 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:14.250 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.251 { 00:04:14.251 "name": "Malloc2", 00:04:14.251 "aliases": [ 00:04:14.251 "0362f7c8-9849-4828-822e-18f6779b1cd3" 00:04:14.251 ], 00:04:14.251 "product_name": "Malloc disk", 00:04:14.251 "block_size": 512, 00:04:14.251 "num_blocks": 16384, 00:04:14.251 "uuid": "0362f7c8-9849-4828-822e-18f6779b1cd3", 00:04:14.251 "assigned_rate_limits": { 00:04:14.251 "rw_ios_per_sec": 0, 00:04:14.251 "rw_mbytes_per_sec": 0, 00:04:14.251 "r_mbytes_per_sec": 0, 00:04:14.251 "w_mbytes_per_sec": 0 00:04:14.251 }, 00:04:14.251 "claimed": true, 00:04:14.251 "claim_type": "exclusive_write", 00:04:14.251 "zoned": false, 00:04:14.251 "supported_io_types": { 00:04:14.251 "read": true, 00:04:14.251 "write": true, 00:04:14.251 "unmap": true, 00:04:14.251 "flush": true, 00:04:14.251 "reset": true, 00:04:14.251 "nvme_admin": false, 00:04:14.251 "nvme_io": false, 00:04:14.251 "nvme_io_md": false, 00:04:14.251 "write_zeroes": true, 00:04:14.251 "zcopy": true, 00:04:14.251 "get_zone_info": false, 00:04:14.251 "zone_management": false, 00:04:14.251 "zone_append": false, 00:04:14.251 "compare": false, 00:04:14.251 "compare_and_write": false, 00:04:14.251 "abort": true, 00:04:14.251 "seek_hole": false, 00:04:14.251 "seek_data": false, 00:04:14.251 "copy": true, 
00:04:14.251 "nvme_iov_md": false 00:04:14.251 }, 00:04:14.251 "memory_domains": [ 00:04:14.251 { 00:04:14.251 "dma_device_id": "system", 00:04:14.251 "dma_device_type": 1 00:04:14.251 }, 00:04:14.251 { 00:04:14.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.251 "dma_device_type": 2 00:04:14.251 } 00:04:14.251 ], 00:04:14.251 "driver_specific": {} 00:04:14.251 }, 00:04:14.251 { 00:04:14.251 "name": "Passthru0", 00:04:14.251 "aliases": [ 00:04:14.251 "a8935117-1c8f-5c13-bad6-9ac5a99f2a47" 00:04:14.251 ], 00:04:14.251 "product_name": "passthru", 00:04:14.251 "block_size": 512, 00:04:14.251 "num_blocks": 16384, 00:04:14.251 "uuid": "a8935117-1c8f-5c13-bad6-9ac5a99f2a47", 00:04:14.251 "assigned_rate_limits": { 00:04:14.251 "rw_ios_per_sec": 0, 00:04:14.251 "rw_mbytes_per_sec": 0, 00:04:14.251 "r_mbytes_per_sec": 0, 00:04:14.251 "w_mbytes_per_sec": 0 00:04:14.251 }, 00:04:14.251 "claimed": false, 00:04:14.251 "zoned": false, 00:04:14.251 "supported_io_types": { 00:04:14.251 "read": true, 00:04:14.251 "write": true, 00:04:14.251 "unmap": true, 00:04:14.251 "flush": true, 00:04:14.251 "reset": true, 00:04:14.251 "nvme_admin": false, 00:04:14.251 "nvme_io": false, 00:04:14.251 "nvme_io_md": false, 00:04:14.251 "write_zeroes": true, 00:04:14.251 "zcopy": true, 00:04:14.251 "get_zone_info": false, 00:04:14.251 "zone_management": false, 00:04:14.251 "zone_append": false, 00:04:14.251 "compare": false, 00:04:14.251 "compare_and_write": false, 00:04:14.251 "abort": true, 00:04:14.251 "seek_hole": false, 00:04:14.251 "seek_data": false, 00:04:14.251 "copy": true, 00:04:14.251 "nvme_iov_md": false 00:04:14.251 }, 00:04:14.251 "memory_domains": [ 00:04:14.251 { 00:04:14.251 "dma_device_id": "system", 00:04:14.251 "dma_device_type": 1 00:04:14.251 }, 00:04:14.251 { 00:04:14.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.251 "dma_device_type": 2 00:04:14.251 } 00:04:14.251 ], 00:04:14.251 "driver_specific": { 00:04:14.251 "passthru": { 00:04:14.251 "name": "Passthru0", 00:04:14.251 "base_bdev_name": "Malloc2" 00:04:14.251 } 00:04:14.251 } 00:04:14.251 } 00:04:14.251 ]' 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.251 19:33:31 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.251 00:04:14.251 real 0m0.234s 00:04:14.251 user 0m0.157s 00:04:14.251 sys 0m0.022s 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:14.251 19:33:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.251 ************************************ 00:04:14.251 END TEST rpc_daemon_integrity 00:04:14.251 ************************************ 00:04:14.251 19:33:31 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.251 19:33:31 rpc -- rpc/rpc.sh@84 -- # killprocess 1048563 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@951 -- # '[' -z 1048563 ']' 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@955 -- # kill -0 1048563 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@956 -- # uname 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1048563 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1048563' 00:04:14.251 killing process with pid 1048563 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@970 -- # kill 1048563 00:04:14.251 19:33:31 rpc -- common/autotest_common.sh@975 -- # wait 1048563 00:04:14.817 00:04:14.817 real 0m1.966s 00:04:14.817 user 0m2.492s 00:04:14.817 sys 0m0.562s 00:04:14.817 19:33:32 rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:14.817 19:33:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.817 ************************************ 00:04:14.817 END TEST rpc 00:04:14.817 ************************************ 00:04:14.817 19:33:32 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.817 19:33:32 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:14.817 19:33:32 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:14.817 19:33:32 -- common/autotest_common.sh@10 -- # set +x 00:04:14.817 ************************************ 00:04:14.817 START TEST skip_rpc 00:04:14.817 ************************************ 00:04:14.817 19:33:32 skip_rpc -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:14.817 * Looking for test storage... 
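Each suite ends by tearing its target down through the killprocess helper traced above: it checks the PID is alive with kill -0, inspects the process name with ps to decide whether sudo is needed, then kills and reaps it. A condensed sketch of that logic (a hypothetical reconstruction from the xtrace, not the exact autotest_common.sh source):

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                      # bail out if the process is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      if [ "$name" = sudo ]; then                     # a sudo wrapper must be killed with sudo
          sudo kill "$pid"
      else
          kill "$pid"
      fi
      wait "$pid" || true                             # reap, tolerating a nonzero exit
  }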
00:04:14.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:14.817 19:33:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.817 19:33:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:14.817 19:33:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:14.817 19:33:32 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:14.817 19:33:32 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:14.817 19:33:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.817 ************************************ 00:04:14.817 START TEST skip_rpc 00:04:14.817 ************************************ 00:04:14.817 19:33:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # test_skip_rpc 00:04:14.817 19:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1048999 00:04:14.817 19:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:14.817 19:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.817 19:33:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:14.817 [2024-07-24 19:33:32.172426] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:14.817 [2024-07-24 19:33:32.172516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048999 ] 00:04:15.074 EAL: No free 2048 kB hugepages reported on node 1 00:04:15.074 [2024-07-24 19:33:32.229419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.074 [2024-07-24 19:33:32.340568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # local es=0 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # rpc_cmd spdk_get_version 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # es=1 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1048999 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' -z 1048999 ']' 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # kill -0 1048999 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # uname 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1048999 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1048999' 00:04:20.380 killing process with pid 1048999 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # kill 1048999 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@975 -- # wait 1048999 00:04:20.380 00:04:20.380 real 0m5.483s 00:04:20.380 user 0m5.166s 00:04:20.380 sys 0m0.324s 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:20.380 19:33:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.380 ************************************ 00:04:20.380 END TEST skip_rpc 00:04:20.380 ************************************ 00:04:20.380 19:33:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:20.380 19:33:37 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:20.380 19:33:37 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:20.380 19:33:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.380 ************************************ 00:04:20.380 START TEST skip_rpc_with_json 00:04:20.380 ************************************ 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # test_skip_rpc_with_json 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1049689 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1049689 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # '[' -z 1049689 ']' 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
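skip_rpc, which just finished above, asserts that an RPC must fail while the target runs with --no-rpc-server: the NOT wrapper around rpc_cmd spdk_get_version inverts the exit status, and the [[ 1 == 0 ]] / es=1 bookkeeping in the trace confirms the call errored as required. A standalone sketch of the same assertion (the binary and script paths assume this workspace layout; the fixed sleep mirrors the test's five-second wait):

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  tgt=$!
  sleep 5                                  # the test sleeps rather than polling for readiness
  if ./scripts/rpc.py spdk_get_version; then
      echo 'unexpected: RPC server answered with --no-rpc-server' >&2
      kill "$tgt"; exit 1
  fi
  kill "$tgt"; wait "$tgt" || true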
00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:20.380 19:33:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.380 [2024-07-24 19:33:37.704691] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:20.380 [2024-07-24 19:33:37.704779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049689 ] 00:04:20.380 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.638 [2024-07-24 19:33:37.762637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.638 [2024-07-24 19:33:37.872824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@865 -- # return 0 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.895 [2024-07-24 19:33:38.133497] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:20.895 request: 00:04:20.895 { 00:04:20.895 "trtype": "tcp", 00:04:20.895 "method": "nvmf_get_transports", 00:04:20.895 "req_id": 1 00:04:20.895 } 00:04:20.895 Got JSON-RPC error response 00:04:20.895 response: 00:04:20.895 { 00:04:20.895 "code": -19, 00:04:20.895 "message": "No such device" 00:04:20.895 } 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.895 [2024-07-24 19:33:38.141635] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:20.895 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:21.153 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:21.153 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.153 { 00:04:21.153 "subsystems": [ 00:04:21.153 { 00:04:21.153 "subsystem": "vfio_user_target", 00:04:21.153 "config": null 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "keyring", 00:04:21.153 "config": [] 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "iobuf", 00:04:21.153 "config": [ 00:04:21.153 { 00:04:21.153 "method": "iobuf_set_options", 00:04:21.153 "params": { 00:04:21.153 "small_pool_count": 8192, 00:04:21.153 "large_pool_count": 1024, 00:04:21.153 "small_bufsize": 8192, 00:04:21.153 "large_bufsize": 
135168 00:04:21.153 } 00:04:21.153 } 00:04:21.153 ] 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "sock", 00:04:21.153 "config": [ 00:04:21.153 { 00:04:21.153 "method": "sock_set_default_impl", 00:04:21.153 "params": { 00:04:21.153 "impl_name": "posix" 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "sock_impl_set_options", 00:04:21.153 "params": { 00:04:21.153 "impl_name": "ssl", 00:04:21.153 "recv_buf_size": 4096, 00:04:21.153 "send_buf_size": 4096, 00:04:21.153 "enable_recv_pipe": true, 00:04:21.153 "enable_quickack": false, 00:04:21.153 "enable_placement_id": 0, 00:04:21.153 "enable_zerocopy_send_server": true, 00:04:21.153 "enable_zerocopy_send_client": false, 00:04:21.153 "zerocopy_threshold": 0, 00:04:21.153 "tls_version": 0, 00:04:21.153 "enable_ktls": false 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "sock_impl_set_options", 00:04:21.153 "params": { 00:04:21.153 "impl_name": "posix", 00:04:21.153 "recv_buf_size": 2097152, 00:04:21.153 "send_buf_size": 2097152, 00:04:21.153 "enable_recv_pipe": true, 00:04:21.153 "enable_quickack": false, 00:04:21.153 "enable_placement_id": 0, 00:04:21.153 "enable_zerocopy_send_server": true, 00:04:21.153 "enable_zerocopy_send_client": false, 00:04:21.153 "zerocopy_threshold": 0, 00:04:21.153 "tls_version": 0, 00:04:21.153 "enable_ktls": false 00:04:21.153 } 00:04:21.153 } 00:04:21.153 ] 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "vmd", 00:04:21.153 "config": [] 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "accel", 00:04:21.153 "config": [ 00:04:21.153 { 00:04:21.153 "method": "accel_set_options", 00:04:21.153 "params": { 00:04:21.153 "small_cache_size": 128, 00:04:21.153 "large_cache_size": 16, 00:04:21.153 "task_count": 2048, 00:04:21.153 "sequence_count": 2048, 00:04:21.153 "buf_count": 2048 00:04:21.153 } 00:04:21.153 } 00:04:21.153 ] 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "bdev", 00:04:21.153 "config": [ 00:04:21.153 { 00:04:21.153 "method": "bdev_set_options", 00:04:21.153 "params": { 00:04:21.153 "bdev_io_pool_size": 65535, 00:04:21.153 "bdev_io_cache_size": 256, 00:04:21.153 "bdev_auto_examine": true, 00:04:21.153 "iobuf_small_cache_size": 128, 00:04:21.153 "iobuf_large_cache_size": 16 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "bdev_raid_set_options", 00:04:21.153 "params": { 00:04:21.153 "process_window_size_kb": 1024, 00:04:21.153 "process_max_bandwidth_mb_sec": 0 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "bdev_iscsi_set_options", 00:04:21.153 "params": { 00:04:21.153 "timeout_sec": 30 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "bdev_nvme_set_options", 00:04:21.153 "params": { 00:04:21.153 "action_on_timeout": "none", 00:04:21.153 "timeout_us": 0, 00:04:21.153 "timeout_admin_us": 0, 00:04:21.153 "keep_alive_timeout_ms": 10000, 00:04:21.153 "arbitration_burst": 0, 00:04:21.153 "low_priority_weight": 0, 00:04:21.153 "medium_priority_weight": 0, 00:04:21.153 "high_priority_weight": 0, 00:04:21.153 "nvme_adminq_poll_period_us": 10000, 00:04:21.153 "nvme_ioq_poll_period_us": 0, 00:04:21.153 "io_queue_requests": 0, 00:04:21.153 "delay_cmd_submit": true, 00:04:21.153 "transport_retry_count": 4, 00:04:21.153 "bdev_retry_count": 3, 00:04:21.153 "transport_ack_timeout": 0, 00:04:21.153 "ctrlr_loss_timeout_sec": 0, 00:04:21.153 "reconnect_delay_sec": 0, 00:04:21.153 "fast_io_fail_timeout_sec": 0, 00:04:21.153 "disable_auto_failback": false, 00:04:21.153 "generate_uuids": 
false, 00:04:21.153 "transport_tos": 0, 00:04:21.153 "nvme_error_stat": false, 00:04:21.153 "rdma_srq_size": 0, 00:04:21.153 "io_path_stat": false, 00:04:21.153 "allow_accel_sequence": false, 00:04:21.153 "rdma_max_cq_size": 0, 00:04:21.153 "rdma_cm_event_timeout_ms": 0, 00:04:21.153 "dhchap_digests": [ 00:04:21.153 "sha256", 00:04:21.153 "sha384", 00:04:21.153 "sha512" 00:04:21.153 ], 00:04:21.153 "dhchap_dhgroups": [ 00:04:21.153 "null", 00:04:21.153 "ffdhe2048", 00:04:21.153 "ffdhe3072", 00:04:21.153 "ffdhe4096", 00:04:21.153 "ffdhe6144", 00:04:21.153 "ffdhe8192" 00:04:21.153 ] 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "bdev_nvme_set_hotplug", 00:04:21.153 "params": { 00:04:21.153 "period_us": 100000, 00:04:21.153 "enable": false 00:04:21.153 } 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "method": "bdev_wait_for_examine" 00:04:21.153 } 00:04:21.153 ] 00:04:21.153 }, 00:04:21.153 { 00:04:21.153 "subsystem": "scsi", 00:04:21.153 "config": null 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "scheduler", 00:04:21.154 "config": [ 00:04:21.154 { 00:04:21.154 "method": "framework_set_scheduler", 00:04:21.154 "params": { 00:04:21.154 "name": "static" 00:04:21.154 } 00:04:21.154 } 00:04:21.154 ] 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "vhost_scsi", 00:04:21.154 "config": [] 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "vhost_blk", 00:04:21.154 "config": [] 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "ublk", 00:04:21.154 "config": [] 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "nbd", 00:04:21.154 "config": [] 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "nvmf", 00:04:21.154 "config": [ 00:04:21.154 { 00:04:21.154 "method": "nvmf_set_config", 00:04:21.154 "params": { 00:04:21.154 "discovery_filter": "match_any", 00:04:21.154 "admin_cmd_passthru": { 00:04:21.154 "identify_ctrlr": false 00:04:21.154 } 00:04:21.154 } 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "method": "nvmf_set_max_subsystems", 00:04:21.154 "params": { 00:04:21.154 "max_subsystems": 1024 00:04:21.154 } 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "method": "nvmf_set_crdt", 00:04:21.154 "params": { 00:04:21.154 "crdt1": 0, 00:04:21.154 "crdt2": 0, 00:04:21.154 "crdt3": 0 00:04:21.154 } 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "method": "nvmf_create_transport", 00:04:21.154 "params": { 00:04:21.154 "trtype": "TCP", 00:04:21.154 "max_queue_depth": 128, 00:04:21.154 "max_io_qpairs_per_ctrlr": 127, 00:04:21.154 "in_capsule_data_size": 4096, 00:04:21.154 "max_io_size": 131072, 00:04:21.154 "io_unit_size": 131072, 00:04:21.154 "max_aq_depth": 128, 00:04:21.154 "num_shared_buffers": 511, 00:04:21.154 "buf_cache_size": 4294967295, 00:04:21.154 "dif_insert_or_strip": false, 00:04:21.154 "zcopy": false, 00:04:21.154 "c2h_success": true, 00:04:21.154 "sock_priority": 0, 00:04:21.154 "abort_timeout_sec": 1, 00:04:21.154 "ack_timeout": 0, 00:04:21.154 "data_wr_pool_size": 0 00:04:21.154 } 00:04:21.154 } 00:04:21.154 ] 00:04:21.154 }, 00:04:21.154 { 00:04:21.154 "subsystem": "iscsi", 00:04:21.154 "config": [ 00:04:21.154 { 00:04:21.154 "method": "iscsi_set_options", 00:04:21.154 "params": { 00:04:21.154 "node_base": "iqn.2016-06.io.spdk", 00:04:21.154 "max_sessions": 128, 00:04:21.154 "max_connections_per_session": 2, 00:04:21.154 "max_queue_depth": 64, 00:04:21.154 "default_time2wait": 2, 00:04:21.154 "default_time2retain": 20, 00:04:21.154 "first_burst_length": 8192, 00:04:21.154 "immediate_data": true, 00:04:21.154 "allow_duplicated_isid": 
false, 00:04:21.154 "error_recovery_level": 0, 00:04:21.154 "nop_timeout": 60, 00:04:21.154 "nop_in_interval": 30, 00:04:21.154 "disable_chap": false, 00:04:21.154 "require_chap": false, 00:04:21.154 "mutual_chap": false, 00:04:21.154 "chap_group": 0, 00:04:21.154 "max_large_datain_per_connection": 64, 00:04:21.154 "max_r2t_per_connection": 4, 00:04:21.154 "pdu_pool_size": 36864, 00:04:21.154 "immediate_data_pool_size": 16384, 00:04:21.154 "data_out_pool_size": 2048 00:04:21.154 } 00:04:21.154 } 00:04:21.154 ] 00:04:21.154 } 00:04:21.154 ] 00:04:21.154 } 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1049689 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' -z 1049689 ']' 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # kill -0 1049689 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # uname 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1049689 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1049689' 00:04:21.154 killing process with pid 1049689 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # kill 1049689 00:04:21.154 19:33:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # wait 1049689 00:04:21.411 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1049837 00:04:21.411 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:21.411 19:33:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1049837 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' -z 1049837 ']' 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # kill -0 1049837 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # uname 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1049837 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1049837' 00:04:26.670 killing process with pid 1049837 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # kill 1049837 00:04:26.670 19:33:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@975 -- # wait 
1049837 00:04:26.928 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.928 19:33:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:26.928 00:04:26.928 real 0m6.628s 00:04:26.928 user 0m6.232s 00:04:26.928 sys 0m0.686s 00:04:26.928 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:26.928 19:33:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:26.928 ************************************ 00:04:26.928 END TEST skip_rpc_with_json 00:04:26.928 ************************************ 00:04:26.928 19:33:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:26.928 19:33:44 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:26.928 19:33:44 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:26.928 19:33:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.187 ************************************ 00:04:27.187 START TEST skip_rpc_with_delay 00:04:27.187 ************************************ 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # test_skip_rpc_with_delay 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # local es=0 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:27.187 [2024-07-24 19:33:44.381906] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
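skip_rpc_with_delay, which starts here, expects exactly these app.c errors (the unclaim_cpu_cores line that follows is part of the same failed start): --wait-for-rpc pauses initialization until an RPC releases it, which is impossible when --no-rpc-server disables the RPC server, so spdk_app_start refuses to run. For contrast, a sketch of the flag's legitimate use, assuming framework_start_init as the RPC that unblocks the paused target:

  # start paused: subsystems are not initialized until told to proceed
  ./build/bin/spdk_tgt --wait-for-rpc -m 0x1 &
  sleep 5
  ./scripts/rpc.py framework_start_init   # initialization proceeds from here
  ./scripts/rpc.py spdk_get_version       # ordinary RPCs now work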
00:04:27.187 [2024-07-24 19:33:44.382013] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # es=1 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:04:27.187 00:04:27.187 real 0m0.069s 00:04:27.187 user 0m0.047s 00:04:27.187 sys 0m0.021s 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:27.187 19:33:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:27.187 ************************************ 00:04:27.187 END TEST skip_rpc_with_delay 00:04:27.187 ************************************ 00:04:27.187 19:33:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:27.187 19:33:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:27.187 19:33:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:27.187 19:33:44 skip_rpc -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:27.187 19:33:44 skip_rpc -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:27.187 19:33:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.187 ************************************ 00:04:27.187 START TEST exit_on_failed_rpc_init 00:04:27.187 ************************************ 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # test_exit_on_failed_rpc_init 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1050549 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1050549 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # '[' -z 1050549 ']' 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:27.187 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.187 [2024-07-24 19:33:44.499105] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:04:27.187 [2024-07-24 19:33:44.499192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050549 ] 00:04:27.187 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.187 [2024-07-24 19:33:44.555631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.445 [2024-07-24 19:33:44.666853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@865 -- # return 0 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # local es=0 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:27.704 19:33:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:27.704 [2024-07-24 19:33:44.975687] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
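What the second launch above is set up to prove: with no -r override, both instances default to /var/tmp/spdk.sock, so the second target must fail with the "socket in use" error seen just below. Reduced to its essentials, reusing the helpers sketched earlier (an illustrative outline, not the test's literal body):

./build/bin/spdk_tgt -m 0x1 &       # first instance claims /var/tmp/spdk.sock
spdk_pid=$!
waitforlisten "$spdk_pid"
NOT ./build/bin/spdk_tgt -m 0x2     # same default socket: must exit non-zero
kill "$spdk_pid"; wait "$spdk_pid"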
00:04:27.704 [2024-07-24 19:33:44.975770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050561 ] 00:04:27.704 EAL: No free 2048 kB hugepages reported on node 1 00:04:27.704 [2024-07-24 19:33:45.037122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.962 [2024-07-24 19:33:45.157623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.962 [2024-07-24 19:33:45.157750] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:27.962 [2024-07-24 19:33:45.157772] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:27.962 [2024-07-24 19:33:45.157786] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # es=234 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # es=106 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # case "$es" in 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@671 -- # es=1 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1050549 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' -z 1050549 ']' 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # kill -0 1050549 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # uname 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1050549 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1050549' 00:04:27.962 killing process with pid 1050549 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # kill 1050549 00:04:27.962 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@975 -- # wait 1050549 00:04:28.527 00:04:28.527 real 0m1.327s 00:04:28.527 user 0m1.498s 00:04:28.527 sys 0m0.450s 00:04:28.527 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:28.527 19:33:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.527 ************************************ 00:04:28.527 END TEST exit_on_failed_rpc_init 00:04:28.527 ************************************ 00:04:28.528 19:33:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:28.528 00:04:28.528 real 0m13.749s 00:04:28.528 user 0m13.047s 00:04:28.528 sys 0m1.635s 00:04:28.528 19:33:45 skip_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:28.528 19:33:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.528 ************************************ 00:04:28.528 END TEST skip_rpc 00:04:28.528 ************************************ 00:04:28.528 19:33:45 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.528 19:33:45 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:28.528 19:33:45 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:28.528 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:04:28.528 ************************************ 00:04:28.528 START TEST rpc_client 00:04:28.528 ************************************ 00:04:28.528 19:33:45 rpc_client -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:28.528 * Looking for test storage... 00:04:28.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:28.528 19:33:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:28.786 OK 00:04:28.786 19:33:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:28.786 00:04:28.786 real 0m0.066s 00:04:28.786 user 0m0.022s 00:04:28.786 sys 0m0.048s 00:04:28.786 19:33:45 rpc_client -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:28.786 19:33:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:28.786 ************************************ 00:04:28.786 END TEST rpc_client 00:04:28.786 ************************************ 00:04:28.786 19:33:45 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.786 19:33:45 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:28.786 19:33:45 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:28.786 19:33:45 -- common/autotest_common.sh@10 -- # set +x 00:04:28.786 ************************************ 00:04:28.786 START TEST json_config 00:04:28.786 ************************************ 00:04:28.786 19:33:45 json_config -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:28.786 19:33:45 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:28.786 19:33:45 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
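One line of noise worth decoding before the json_config run below: sourcing nvmf/common.sh emits "[: : integer expression expected" because '[' '' -eq 1 ']' performs an arithmetic comparison on an empty operand, which is a runtime error rather than a clean false. A two-line reproduction and the defensive form that avoids it:

flag=''
[ "$flag" -eq 1 ] && echo hugepages        # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo hugepages   # defaulted operand: quietly false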
00:04:28.786 19:33:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:28.786 19:33:46 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:28.786 19:33:46 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:28.786 19:33:46 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:28.786 19:33:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.786 19:33:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.786 19:33:46 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.786 19:33:46 json_config -- paths/export.sh@5 -- # export PATH 00:04:28.786 19:33:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@51 -- # : 0 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:28.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:28.786 19:33:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:28.786 19:33:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:28.787 INFO: JSON configuration test init 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.787 19:33:46 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:28.787 19:33:46 json_config -- json_config/common.sh@9 -- # local app=target 00:04:28.787 19:33:46 json_config -- json_config/common.sh@10 -- # shift 00:04:28.787 19:33:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:28.787 19:33:46 json_config -- 
json_config/common.sh@13 -- # [[ -z '' ]] 00:04:28.787 19:33:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:28.787 19:33:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.787 19:33:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:28.787 19:33:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1050803 00:04:28.787 19:33:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:28.787 19:33:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:28.787 Waiting for target to run... 00:04:28.787 19:33:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1050803 /var/tmp/spdk_tgt.sock 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@832 -- # '[' -z 1050803 ']' 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:28.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:28.787 19:33:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.787 [2024-07-24 19:33:46.072391] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:28.787 [2024-07-24 19:33:46.072491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1050803 ] 00:04:28.787 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.352 [2024-07-24 19:33:46.561330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.352 [2024-07-24 19:33:46.669181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.917 19:33:47 json_config -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:29.917 19:33:47 json_config -- common/autotest_common.sh@865 -- # return 0 00:04:29.917 19:33:47 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.917 00:04:29.917 19:33:47 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:29.917 19:33:47 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:29.917 19:33:47 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:29.917 19:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.917 19:33:47 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:29.917 19:33:47 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:29.917 19:33:47 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:29.917 19:33:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.917 19:33:47 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:29.917 19:33:47 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:29.917 19:33:47 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:33.195 19:33:50 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:33.195 19:33:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:33.195 19:33:50 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:33.195 19:33:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.195 19:33:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:33.196 19:33:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@51 -- # sort 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:33.196 19:33:50 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:33.196 19:33:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:33.196 19:33:50 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:33.196 19:33:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:33.196 19:33:50 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.196 19:33:50 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.453 MallocForNvmf0 00:04:33.453 19:33:50 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.453 19:33:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.710 MallocForNvmf1 00:04:33.710 19:33:51 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.710 19:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.966 [2024-07-24 19:33:51.232915] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.966 19:33:51 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.966 19:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:34.223 19:33:51 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:34.223 19:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:34.481 19:33:51 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.481 19:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.739 19:33:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.739 19:33:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.997 [2024-07-24 19:33:52.224160] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.997 19:33:52 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:34.997 19:33:52 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:34.997 19:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.997 19:33:52 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:34.997 19:33:52 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:34.997 19:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.997 19:33:52 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:34.997 19:33:52 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.997 19:33:52 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:35.255 MallocBdevForConfigChangeCheck 00:04:35.255 19:33:52 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:35.255 19:33:52 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:35.255 19:33:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.255 19:33:52 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:35.255 19:33:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.820 19:33:52 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:35.821 INFO: shutting down applications... 00:04:35.821 19:33:52 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:35.821 19:33:52 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:35.821 19:33:52 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:35.821 19:33:52 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:37.189 Calling clear_iscsi_subsystem 00:04:37.189 Calling clear_nvmf_subsystem 00:04:37.189 Calling clear_nbd_subsystem 00:04:37.189 Calling clear_ublk_subsystem 00:04:37.189 Calling clear_vhost_blk_subsystem 00:04:37.189 Calling clear_vhost_scsi_subsystem 00:04:37.189 Calling clear_bdev_subsystem 00:04:37.189 19:33:54 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:37.189 19:33:54 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:37.189 19:33:54 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:37.189 19:33:54 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.189 19:33:54 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:37.189 19:33:54 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:37.753 19:33:54 json_config -- json_config/json_config.sh@349 -- # break 00:04:37.753 19:33:54 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:37.753 19:33:54 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:37.753 19:33:54 json_config -- json_config/common.sh@31 -- # local app=target 00:04:37.753 19:33:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.753 19:33:54 json_config -- json_config/common.sh@35 -- # [[ -n 1050803 ]] 00:04:37.753 19:33:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1050803 00:04:37.753 19:33:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.753 19:33:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.753 19:33:54 json_config -- json_config/common.sh@41 -- # kill -0 1050803 00:04:37.753 19:33:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.318 19:33:55 json_config -- 
json_config/common.sh@40 -- # (( i++ )) 00:04:38.318 19:33:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.318 19:33:55 json_config -- json_config/common.sh@41 -- # kill -0 1050803 00:04:38.318 19:33:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.318 19:33:55 json_config -- json_config/common.sh@43 -- # break 00:04:38.318 19:33:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.318 19:33:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.318 SPDK target shutdown done 00:04:38.318 19:33:55 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:38.318 INFO: relaunching applications... 00:04:38.318 19:33:55 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.318 19:33:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:38.318 19:33:55 json_config -- json_config/common.sh@10 -- # shift 00:04:38.318 19:33:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:38.318 19:33:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:38.318 19:33:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:38.318 19:33:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.318 19:33:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:38.318 19:33:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1052025 00:04:38.318 19:33:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.318 19:33:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:38.318 Waiting for target to run... 00:04:38.318 19:33:55 json_config -- json_config/common.sh@25 -- # waitforlisten 1052025 /var/tmp/spdk_tgt.sock 00:04:38.318 19:33:55 json_config -- common/autotest_common.sh@832 -- # '[' -z 1052025 ']' 00:04:38.318 19:33:55 json_config -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:38.318 19:33:55 json_config -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:38.318 19:33:55 json_config -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:38.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:38.318 19:33:55 json_config -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:38.318 19:33:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.318 [2024-07-24 19:33:55.472967] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
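Before the relaunch above replays spdk_tgt_config.json, the first target was configured live over its RPC socket. The bring-up sequence, condensed from the rpc.py calls logged earlier (the rpc wrapper function is illustrative shorthand, not part of the test):

rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
rpc bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB bdev, 512 B blocks
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB bdev, 1024 B blocks
rpc nvmf_create_transport -t tcp -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420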
00:04:38.318 [2024-07-24 19:33:55.473065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052025 ] 00:04:38.318 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.911 [2024-07-24 19:33:55.997003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.911 [2024-07-24 19:33:56.104462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.187 [2024-07-24 19:33:59.148223] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.187 [2024-07-24 19:33:59.180741] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:42.752 19:33:59 json_config -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:42.752 19:33:59 json_config -- common/autotest_common.sh@865 -- # return 0 00:04:42.752 19:33:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:42.752 00:04:42.752 19:33:59 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:42.752 19:33:59 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:42.752 INFO: Checking if target configuration is the same... 00:04:42.752 19:33:59 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.752 19:33:59 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:42.752 19:33:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:42.752 + '[' 2 -ne 2 ']' 00:04:42.752 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:42.752 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:42.752 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:42.752 +++ basename /dev/fd/62 00:04:42.752 ++ mktemp /tmp/62.XXX 00:04:42.752 + tmp_file_1=/tmp/62.h7e 00:04:42.752 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:42.752 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:42.752 + tmp_file_2=/tmp/spdk_tgt_config.json.i68 00:04:42.752 + ret=0 00:04:42.752 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.009 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.009 + diff -u /tmp/62.h7e /tmp/spdk_tgt_config.json.i68 00:04:43.009 + echo 'INFO: JSON config files are the same' 00:04:43.009 INFO: JSON config files are the same 00:04:43.009 + rm /tmp/62.h7e /tmp/spdk_tgt_config.json.i68 00:04:43.009 + exit 0 00:04:43.009 19:34:00 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:43.009 19:34:00 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:43.009 INFO: changing configuration and checking if this can be detected... 
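The "JSON config files are the same" verdict above comes from a normalize-then-diff pass over two save_config dumps. In outline, assuming config_filter.py filters stdin to stdout as the pipeline suggests (file names here are illustrative; the test itself uses mktemp /tmp/62.XXX style paths):

scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
diff -u /tmp/live.sorted /tmp/disk.sorted && echo 'INFO: JSON config files are the same'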
00:04:43.009 19:34:00 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.009 19:34:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:43.266 19:34:00 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.266 19:34:00 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:43.266 19:34:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:43.266 + '[' 2 -ne 2 ']' 00:04:43.266 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:43.266 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:43.266 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:43.266 +++ basename /dev/fd/62 00:04:43.266 ++ mktemp /tmp/62.XXX 00:04:43.266 + tmp_file_1=/tmp/62.GxX 00:04:43.266 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:43.266 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:43.266 + tmp_file_2=/tmp/spdk_tgt_config.json.xcL 00:04:43.266 + ret=0 00:04:43.266 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.830 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:43.830 + diff -u /tmp/62.GxX /tmp/spdk_tgt_config.json.xcL 00:04:43.830 + ret=1 00:04:43.830 + echo '=== Start of file: /tmp/62.GxX ===' 00:04:43.830 + cat /tmp/62.GxX 00:04:43.830 + echo '=== End of file: /tmp/62.GxX ===' 00:04:43.830 + echo '' 00:04:43.830 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xcL ===' 00:04:43.830 + cat /tmp/spdk_tgt_config.json.xcL 00:04:43.830 + echo '=== End of file: /tmp/spdk_tgt_config.json.xcL ===' 00:04:43.830 + echo '' 00:04:43.830 + rm /tmp/62.GxX /tmp/spdk_tgt_config.json.xcL 00:04:43.830 + exit 1 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:43.830 INFO: configuration change detected. 
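With the change detected, the target is torn down via the killprocess helper traced just below: confirm the pid is still alive, check its comm name (reactor_0 for an SPDK app, and never sudo), then kill and reap it. A simplified sketch of that pattern (the real helper also handles the sudo-wrapped case):

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1   # refuse to kill the wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}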
00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@321 -- # [[ -n 1052025 ]] 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.830 19:34:01 json_config -- json_config/json_config.sh@327 -- # killprocess 1052025 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@951 -- # '[' -z 1052025 ']' 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@955 -- # kill -0 1052025 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@956 -- # uname 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1052025 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1052025' 00:04:43.830 killing process with pid 1052025 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@970 -- # kill 1052025 00:04:43.830 19:34:01 json_config -- common/autotest_common.sh@975 -- # wait 1052025 00:04:45.728 19:34:02 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.728 19:34:02 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:45.728 19:34:02 json_config -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:45.728 19:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.728 19:34:02 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:45.728 19:34:02 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:45.728 INFO: Success 00:04:45.728 00:04:45.728 real 0m16.791s 
00:04:45.728 user 0m18.584s 00:04:45.728 sys 0m2.277s 00:04:45.728 19:34:02 json_config -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:45.728 19:34:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.728 ************************************ 00:04:45.728 END TEST json_config 00:04:45.728 ************************************ 00:04:45.728 19:34:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:45.728 19:34:02 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:45.728 19:34:02 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:45.728 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:04:45.728 ************************************ 00:04:45.728 START TEST json_config_extra_key 00:04:45.728 ************************************ 00:04:45.728 19:34:02 json_config_extra_key -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:45.728 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.728 19:34:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:45.728 19:34:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.728 19:34:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.728 19:34:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.728 19:34:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.729 19:34:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.729 19:34:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.729 19:34:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.729 19:34:02 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.729 19:34:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.729 19:34:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.729 19:34:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:45.729 19:34:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.729 19:34:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A 
app_socket 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:45.729 INFO: launching applications... 00:04:45.729 19:34:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1053039 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.729 Waiting for target to run... 00:04:45.729 19:34:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1053039 /var/tmp/spdk_tgt.sock 00:04:45.729 19:34:02 json_config_extra_key -- common/autotest_common.sh@832 -- # '[' -z 1053039 ']' 00:04:45.729 19:34:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.729 19:34:02 json_config_extra_key -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:45.729 19:34:02 json_config_extra_key -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.729 19:34:02 json_config_extra_key -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:45.729 19:34:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.729 [2024-07-24 19:34:02.914469] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
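Once the extra_key target above is up, the test immediately shuts it down again through json_config/common.sh's SIGINT-and-poll loop, the same (( i < 30 )) / sleep 0.5 cadence traced for the json_config target earlier and replayed below. As a sketch, simplified to take a pid directly rather than the app name:

json_config_test_shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; return 0; }
        sleep 0.5
    done
    return 1    # target ignored SIGINT; the real harness escalates from here
}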
00:04:45.729 [2024-07-24 19:34:02.914553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053039 ] 00:04:45.729 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.987 [2024-07-24 19:34:03.278342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.987 [2024-07-24 19:34:03.367609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.551 19:34:03 json_config_extra_key -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:46.551 19:34:03 json_config_extra_key -- common/autotest_common.sh@865 -- # return 0 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:46.551 00:04:46.551 19:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:46.551 INFO: shutting down applications... 00:04:46.551 19:34:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1053039 ]] 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1053039 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1053039 00:04:46.551 19:34:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1053039 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:47.117 19:34:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:47.117 SPDK target shutdown done 00:04:47.117 19:34:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:47.117 Success 00:04:47.117 00:04:47.117 real 0m1.534s 00:04:47.117 user 0m1.504s 00:04:47.117 sys 0m0.458s 00:04:47.117 19:34:04 json_config_extra_key -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:47.117 19:34:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.117 ************************************ 00:04:47.117 END TEST json_config_extra_key 00:04:47.117 ************************************ 00:04:47.117 19:34:04 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.117 19:34:04 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:47.117 19:34:04 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:47.117 19:34:04 -- common/autotest_common.sh@10 -- # set +x 00:04:47.117 
************************************ 00:04:47.117 START TEST alias_rpc 00:04:47.117 ************************************ 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:47.117 * Looking for test storage... 00:04:47.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:47.117 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:47.117 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1053245 00:04:47.117 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:47.117 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1053245 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@832 -- # '[' -z 1053245 ']' 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:47.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:47.117 19:34:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:47.117 [2024-07-24 19:34:04.480098] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:47.117 [2024-07-24 19:34:04.480207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053245 ] 00:04:47.375 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.375 [2024-07-24 19:34:04.539028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.375 [2024-07-24 19:34:04.644829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.633 19:34:04 alias_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:47.633 19:34:04 alias_rpc -- common/autotest_common.sh@865 -- # return 0 00:04:47.633 19:34:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:47.891 19:34:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1053245 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@951 -- # '[' -z 1053245 ']' 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@955 -- # kill -0 1053245 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@956 -- # uname 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1053245 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1053245' 00:04:47.891 killing process with pid 1053245 00:04:47.891 19:34:05 alias_rpc -- common/autotest_common.sh@970 -- # kill 1053245 00:04:47.891 19:34:05 
alias_rpc -- common/autotest_common.sh@975 -- # wait 1053245 00:04:48.457 00:04:48.457 real 0m1.272s 00:04:48.457 user 0m1.351s 00:04:48.457 sys 0m0.421s 00:04:48.457 19:34:05 alias_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:48.457 19:34:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.457 ************************************ 00:04:48.457 END TEST alias_rpc 00:04:48.457 ************************************ 00:04:48.457 19:34:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:48.457 19:34:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:48.457 19:34:05 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:48.457 19:34:05 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:48.457 19:34:05 -- common/autotest_common.sh@10 -- # set +x 00:04:48.457 ************************************ 00:04:48.457 START TEST spdkcli_tcp 00:04:48.457 ************************************ 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:48.457 * Looking for test storage... 00:04:48.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@725 -- # xtrace_disable 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1053535 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:48.457 19:34:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1053535 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@832 -- # '[' -z 1053535 ']' 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:48.457 19:34:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:48.457 [2024-07-24 19:34:05.807333] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:04:48.457 [2024-07-24 19:34:05.807428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053535 ] 00:04:48.457 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.715 [2024-07-24 19:34:05.866417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:48.715 [2024-07-24 19:34:05.974998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.715 [2024-07-24 19:34:05.975001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.973 19:34:06 spdkcli_tcp -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:48.973 19:34:06 spdkcli_tcp -- common/autotest_common.sh@865 -- # return 0 00:04:48.973 19:34:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1053550 00:04:48.973 19:34:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:48.973 19:34:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:49.231 [ 00:04:49.231 "bdev_malloc_delete", 00:04:49.231 "bdev_malloc_create", 00:04:49.231 "bdev_null_resize", 00:04:49.231 "bdev_null_delete", 00:04:49.231 "bdev_null_create", 00:04:49.231 "bdev_nvme_cuse_unregister", 00:04:49.231 "bdev_nvme_cuse_register", 00:04:49.232 "bdev_opal_new_user", 00:04:49.232 "bdev_opal_set_lock_state", 00:04:49.232 "bdev_opal_delete", 00:04:49.232 "bdev_opal_get_info", 00:04:49.232 "bdev_opal_create", 00:04:49.232 "bdev_nvme_opal_revert", 00:04:49.232 "bdev_nvme_opal_init", 00:04:49.232 "bdev_nvme_send_cmd", 00:04:49.232 "bdev_nvme_get_path_iostat", 00:04:49.232 "bdev_nvme_get_mdns_discovery_info", 00:04:49.232 "bdev_nvme_stop_mdns_discovery", 00:04:49.232 "bdev_nvme_start_mdns_discovery", 00:04:49.232 "bdev_nvme_set_multipath_policy", 00:04:49.232 "bdev_nvme_set_preferred_path", 00:04:49.232 "bdev_nvme_get_io_paths", 00:04:49.232 "bdev_nvme_remove_error_injection", 00:04:49.232 "bdev_nvme_add_error_injection", 00:04:49.232 "bdev_nvme_get_discovery_info", 00:04:49.232 "bdev_nvme_stop_discovery", 00:04:49.232 "bdev_nvme_start_discovery", 00:04:49.232 "bdev_nvme_get_controller_health_info", 00:04:49.232 "bdev_nvme_disable_controller", 00:04:49.232 "bdev_nvme_enable_controller", 00:04:49.232 "bdev_nvme_reset_controller", 00:04:49.232 "bdev_nvme_get_transport_statistics", 00:04:49.232 "bdev_nvme_apply_firmware", 00:04:49.232 "bdev_nvme_detach_controller", 00:04:49.232 "bdev_nvme_get_controllers", 00:04:49.232 "bdev_nvme_attach_controller", 00:04:49.232 "bdev_nvme_set_hotplug", 00:04:49.232 "bdev_nvme_set_options", 00:04:49.232 "bdev_passthru_delete", 00:04:49.232 "bdev_passthru_create", 00:04:49.232 "bdev_lvol_set_parent_bdev", 00:04:49.232 "bdev_lvol_set_parent", 00:04:49.232 "bdev_lvol_check_shallow_copy", 00:04:49.232 "bdev_lvol_start_shallow_copy", 00:04:49.232 "bdev_lvol_grow_lvstore", 00:04:49.232 "bdev_lvol_get_lvols", 00:04:49.232 "bdev_lvol_get_lvstores", 00:04:49.232 "bdev_lvol_delete", 00:04:49.232 "bdev_lvol_set_read_only", 00:04:49.232 "bdev_lvol_resize", 00:04:49.232 "bdev_lvol_decouple_parent", 00:04:49.232 "bdev_lvol_inflate", 00:04:49.232 "bdev_lvol_rename", 00:04:49.232 "bdev_lvol_clone_bdev", 00:04:49.232 "bdev_lvol_clone", 00:04:49.232 "bdev_lvol_snapshot", 00:04:49.232 "bdev_lvol_create", 00:04:49.232 "bdev_lvol_delete_lvstore", 00:04:49.232 
"bdev_lvol_rename_lvstore", 00:04:49.232 "bdev_lvol_create_lvstore", 00:04:49.232 "bdev_raid_set_options", 00:04:49.232 "bdev_raid_remove_base_bdev", 00:04:49.232 "bdev_raid_add_base_bdev", 00:04:49.232 "bdev_raid_delete", 00:04:49.232 "bdev_raid_create", 00:04:49.232 "bdev_raid_get_bdevs", 00:04:49.232 "bdev_error_inject_error", 00:04:49.232 "bdev_error_delete", 00:04:49.232 "bdev_error_create", 00:04:49.232 "bdev_split_delete", 00:04:49.232 "bdev_split_create", 00:04:49.232 "bdev_delay_delete", 00:04:49.232 "bdev_delay_create", 00:04:49.232 "bdev_delay_update_latency", 00:04:49.232 "bdev_zone_block_delete", 00:04:49.232 "bdev_zone_block_create", 00:04:49.232 "blobfs_create", 00:04:49.232 "blobfs_detect", 00:04:49.232 "blobfs_set_cache_size", 00:04:49.232 "bdev_aio_delete", 00:04:49.232 "bdev_aio_rescan", 00:04:49.232 "bdev_aio_create", 00:04:49.232 "bdev_ftl_set_property", 00:04:49.232 "bdev_ftl_get_properties", 00:04:49.232 "bdev_ftl_get_stats", 00:04:49.232 "bdev_ftl_unmap", 00:04:49.232 "bdev_ftl_unload", 00:04:49.232 "bdev_ftl_delete", 00:04:49.232 "bdev_ftl_load", 00:04:49.232 "bdev_ftl_create", 00:04:49.232 "bdev_virtio_attach_controller", 00:04:49.232 "bdev_virtio_scsi_get_devices", 00:04:49.232 "bdev_virtio_detach_controller", 00:04:49.232 "bdev_virtio_blk_set_hotplug", 00:04:49.232 "bdev_iscsi_delete", 00:04:49.232 "bdev_iscsi_create", 00:04:49.232 "bdev_iscsi_set_options", 00:04:49.232 "accel_error_inject_error", 00:04:49.232 "ioat_scan_accel_module", 00:04:49.232 "dsa_scan_accel_module", 00:04:49.232 "iaa_scan_accel_module", 00:04:49.232 "vfu_virtio_create_scsi_endpoint", 00:04:49.232 "vfu_virtio_scsi_remove_target", 00:04:49.232 "vfu_virtio_scsi_add_target", 00:04:49.232 "vfu_virtio_create_blk_endpoint", 00:04:49.232 "vfu_virtio_delete_endpoint", 00:04:49.232 "keyring_file_remove_key", 00:04:49.232 "keyring_file_add_key", 00:04:49.232 "keyring_linux_set_options", 00:04:49.232 "iscsi_get_histogram", 00:04:49.232 "iscsi_enable_histogram", 00:04:49.232 "iscsi_set_options", 00:04:49.232 "iscsi_get_auth_groups", 00:04:49.232 "iscsi_auth_group_remove_secret", 00:04:49.232 "iscsi_auth_group_add_secret", 00:04:49.232 "iscsi_delete_auth_group", 00:04:49.232 "iscsi_create_auth_group", 00:04:49.232 "iscsi_set_discovery_auth", 00:04:49.232 "iscsi_get_options", 00:04:49.232 "iscsi_target_node_request_logout", 00:04:49.232 "iscsi_target_node_set_redirect", 00:04:49.232 "iscsi_target_node_set_auth", 00:04:49.232 "iscsi_target_node_add_lun", 00:04:49.232 "iscsi_get_stats", 00:04:49.232 "iscsi_get_connections", 00:04:49.232 "iscsi_portal_group_set_auth", 00:04:49.232 "iscsi_start_portal_group", 00:04:49.232 "iscsi_delete_portal_group", 00:04:49.232 "iscsi_create_portal_group", 00:04:49.232 "iscsi_get_portal_groups", 00:04:49.232 "iscsi_delete_target_node", 00:04:49.232 "iscsi_target_node_remove_pg_ig_maps", 00:04:49.232 "iscsi_target_node_add_pg_ig_maps", 00:04:49.232 "iscsi_create_target_node", 00:04:49.232 "iscsi_get_target_nodes", 00:04:49.232 "iscsi_delete_initiator_group", 00:04:49.232 "iscsi_initiator_group_remove_initiators", 00:04:49.232 "iscsi_initiator_group_add_initiators", 00:04:49.232 "iscsi_create_initiator_group", 00:04:49.232 "iscsi_get_initiator_groups", 00:04:49.232 "nvmf_set_crdt", 00:04:49.232 "nvmf_set_config", 00:04:49.232 "nvmf_set_max_subsystems", 00:04:49.232 "nvmf_stop_mdns_prr", 00:04:49.232 "nvmf_publish_mdns_prr", 00:04:49.232 "nvmf_subsystem_get_listeners", 00:04:49.232 "nvmf_subsystem_get_qpairs", 00:04:49.232 "nvmf_subsystem_get_controllers", 00:04:49.232 
"nvmf_get_stats", 00:04:49.232 "nvmf_get_transports", 00:04:49.232 "nvmf_create_transport", 00:04:49.232 "nvmf_get_targets", 00:04:49.232 "nvmf_delete_target", 00:04:49.232 "nvmf_create_target", 00:04:49.232 "nvmf_subsystem_allow_any_host", 00:04:49.232 "nvmf_subsystem_remove_host", 00:04:49.232 "nvmf_subsystem_add_host", 00:04:49.232 "nvmf_ns_remove_host", 00:04:49.232 "nvmf_ns_add_host", 00:04:49.232 "nvmf_subsystem_remove_ns", 00:04:49.232 "nvmf_subsystem_add_ns", 00:04:49.232 "nvmf_subsystem_listener_set_ana_state", 00:04:49.232 "nvmf_discovery_get_referrals", 00:04:49.232 "nvmf_discovery_remove_referral", 00:04:49.232 "nvmf_discovery_add_referral", 00:04:49.232 "nvmf_subsystem_remove_listener", 00:04:49.232 "nvmf_subsystem_add_listener", 00:04:49.232 "nvmf_delete_subsystem", 00:04:49.232 "nvmf_create_subsystem", 00:04:49.232 "nvmf_get_subsystems", 00:04:49.232 "env_dpdk_get_mem_stats", 00:04:49.232 "nbd_get_disks", 00:04:49.232 "nbd_stop_disk", 00:04:49.232 "nbd_start_disk", 00:04:49.232 "ublk_recover_disk", 00:04:49.232 "ublk_get_disks", 00:04:49.232 "ublk_stop_disk", 00:04:49.232 "ublk_start_disk", 00:04:49.232 "ublk_destroy_target", 00:04:49.232 "ublk_create_target", 00:04:49.232 "virtio_blk_create_transport", 00:04:49.232 "virtio_blk_get_transports", 00:04:49.232 "vhost_controller_set_coalescing", 00:04:49.232 "vhost_get_controllers", 00:04:49.232 "vhost_delete_controller", 00:04:49.232 "vhost_create_blk_controller", 00:04:49.232 "vhost_scsi_controller_remove_target", 00:04:49.232 "vhost_scsi_controller_add_target", 00:04:49.232 "vhost_start_scsi_controller", 00:04:49.232 "vhost_create_scsi_controller", 00:04:49.232 "thread_set_cpumask", 00:04:49.232 "framework_get_governor", 00:04:49.232 "framework_get_scheduler", 00:04:49.232 "framework_set_scheduler", 00:04:49.232 "framework_get_reactors", 00:04:49.232 "thread_get_io_channels", 00:04:49.232 "thread_get_pollers", 00:04:49.232 "thread_get_stats", 00:04:49.232 "framework_monitor_context_switch", 00:04:49.232 "spdk_kill_instance", 00:04:49.232 "log_enable_timestamps", 00:04:49.232 "log_get_flags", 00:04:49.232 "log_clear_flag", 00:04:49.232 "log_set_flag", 00:04:49.232 "log_get_level", 00:04:49.232 "log_set_level", 00:04:49.232 "log_get_print_level", 00:04:49.232 "log_set_print_level", 00:04:49.232 "framework_enable_cpumask_locks", 00:04:49.232 "framework_disable_cpumask_locks", 00:04:49.232 "framework_wait_init", 00:04:49.232 "framework_start_init", 00:04:49.232 "scsi_get_devices", 00:04:49.232 "bdev_get_histogram", 00:04:49.232 "bdev_enable_histogram", 00:04:49.232 "bdev_set_qos_limit", 00:04:49.232 "bdev_set_qd_sampling_period", 00:04:49.232 "bdev_get_bdevs", 00:04:49.232 "bdev_reset_iostat", 00:04:49.232 "bdev_get_iostat", 00:04:49.232 "bdev_examine", 00:04:49.232 "bdev_wait_for_examine", 00:04:49.232 "bdev_set_options", 00:04:49.232 "notify_get_notifications", 00:04:49.232 "notify_get_types", 00:04:49.232 "accel_get_stats", 00:04:49.232 "accel_set_options", 00:04:49.232 "accel_set_driver", 00:04:49.232 "accel_crypto_key_destroy", 00:04:49.232 "accel_crypto_keys_get", 00:04:49.232 "accel_crypto_key_create", 00:04:49.232 "accel_assign_opc", 00:04:49.232 "accel_get_module_info", 00:04:49.232 "accel_get_opc_assignments", 00:04:49.232 "vmd_rescan", 00:04:49.232 "vmd_remove_device", 00:04:49.232 "vmd_enable", 00:04:49.232 "sock_get_default_impl", 00:04:49.232 "sock_set_default_impl", 00:04:49.232 "sock_impl_set_options", 00:04:49.233 "sock_impl_get_options", 00:04:49.233 "iobuf_get_stats", 00:04:49.233 "iobuf_set_options", 
00:04:49.233 "keyring_get_keys", 00:04:49.233 "framework_get_pci_devices", 00:04:49.233 "framework_get_config", 00:04:49.233 "framework_get_subsystems", 00:04:49.233 "vfu_tgt_set_base_path", 00:04:49.233 "trace_get_info", 00:04:49.233 "trace_get_tpoint_group_mask", 00:04:49.233 "trace_disable_tpoint_group", 00:04:49.233 "trace_enable_tpoint_group", 00:04:49.233 "trace_clear_tpoint_mask", 00:04:49.233 "trace_set_tpoint_mask", 00:04:49.233 "spdk_get_version", 00:04:49.233 "rpc_get_methods" 00:04:49.233 ] 00:04:49.233 19:34:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@731 -- # xtrace_disable 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.233 19:34:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:49.233 19:34:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1053535 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' -z 1053535 ']' 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # kill -0 1053535 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # uname 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1053535 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1053535' 00:04:49.233 killing process with pid 1053535 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@970 -- # kill 1053535 00:04:49.233 19:34:06 spdkcli_tcp -- common/autotest_common.sh@975 -- # wait 1053535 00:04:49.799 00:04:49.799 real 0m1.295s 00:04:49.799 user 0m2.221s 00:04:49.799 sys 0m0.478s 00:04:49.799 19:34:06 spdkcli_tcp -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:49.799 19:34:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.799 ************************************ 00:04:49.799 END TEST spdkcli_tcp 00:04:49.799 ************************************ 00:04:49.799 19:34:07 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.799 19:34:07 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:49.799 19:34:07 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:49.799 19:34:07 -- common/autotest_common.sh@10 -- # set +x 00:04:49.799 ************************************ 00:04:49.799 START TEST dpdk_mem_utility 00:04:49.799 ************************************ 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:49.799 * Looking for test storage... 
00:04:49.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:49.799 19:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:49.799 19:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1053745 00:04:49.799 19:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.799 19:34:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1053745 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@832 -- # '[' -z 1053745 ']' 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:49.799 19:34:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:49.799 [2024-07-24 19:34:07.149475] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:49.799 [2024-07-24 19:34:07.149586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053745 ] 00:04:49.799 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.058 [2024-07-24 19:34:07.210072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.058 [2024-07-24 19:34:07.314852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.993 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:50.993 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@865 -- # return 0 00:04:50.993 19:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:50.993 19:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:50.993 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:50.993 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:50.993 { 00:04:50.993 "filename": "/tmp/spdk_mem_dump.txt" 00:04:50.993 } 00:04:50.993 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:50.993 19:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:50.993 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:50.993 1 heaps totaling size 814.000000 MiB 00:04:50.993 size: 814.000000 MiB heap id: 0 00:04:50.993 end heaps---------- 00:04:50.993 8 mempools totaling size 598.116089 MiB 00:04:50.993 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:50.993 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:50.993 size: 84.521057 MiB name: bdev_io_1053745 00:04:50.993 size: 51.011292 MiB name: evtpool_1053745 00:04:50.993 
size: 50.003479 MiB name: msgpool_1053745 00:04:50.993 size: 21.763794 MiB name: PDU_Pool 00:04:50.993 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:50.993 size: 0.026123 MiB name: Session_Pool 00:04:50.993 end mempools------- 00:04:50.993 6 memzones totaling size 4.142822 MiB 00:04:50.993 size: 1.000366 MiB name: RG_ring_0_1053745 00:04:50.993 size: 1.000366 MiB name: RG_ring_1_1053745 00:04:50.993 size: 1.000366 MiB name: RG_ring_4_1053745 00:04:50.993 size: 1.000366 MiB name: RG_ring_5_1053745 00:04:50.993 size: 0.125366 MiB name: RG_ring_2_1053745 00:04:50.993 size: 0.015991 MiB name: RG_ring_3_1053745 00:04:50.993 end memzones------- 00:04:50.993 19:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:50.993 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:50.993 list of free elements. size: 12.519348 MiB 00:04:50.993 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:50.993 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:50.993 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:50.993 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:50.993 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:50.993 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:50.993 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:50.993 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:50.993 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:50.993 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:50.993 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:50.993 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:50.993 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:50.993 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:50.993 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:50.993 list of standard malloc elements. 
size: 199.218079 MiB 00:04:50.993 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:50.993 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:50.993 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:50.993 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:50.993 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:50.993 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:50.993 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:50.993 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:50.993 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:50.993 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:50.993 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:50.993 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:50.993 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:50.993 list of memzone associated elements. 
size: 602.262573 MiB 00:04:50.993 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:50.993 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:50.993 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:50.993 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:50.993 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:50.993 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1053745_0 00:04:50.993 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:50.993 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1053745_0 00:04:50.993 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:50.993 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1053745_0 00:04:50.993 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:50.993 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:50.993 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:50.993 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:50.993 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:50.993 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1053745 00:04:50.993 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:50.993 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1053745 00:04:50.993 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:50.993 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1053745 00:04:50.993 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:50.993 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:50.993 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:50.993 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:50.993 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:50.993 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:50.993 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:50.993 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:50.993 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:50.993 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1053745 00:04:50.993 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:50.993 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1053745 00:04:50.993 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:50.993 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1053745 00:04:50.993 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:50.993 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1053745 00:04:50.993 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:50.993 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1053745 00:04:50.993 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:50.993 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:50.993 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:50.993 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:50.993 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:50.993 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:50.993 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:50.993 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1053745 00:04:50.993 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:50.993 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:50.993 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:50.993 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:50.993 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:50.993 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1053745 00:04:50.994 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:50.994 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:50.994 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:50.994 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1053745 00:04:50.994 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:50.994 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1053745 00:04:50.994 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:50.994 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:50.994 19:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:50.994 19:34:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1053745 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' -z 1053745 ']' 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@955 -- # kill -0 1053745 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # uname 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1053745 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1053745' 00:04:50.994 killing process with pid 1053745 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@970 -- # kill 1053745 00:04:50.994 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@975 -- # wait 1053745 00:04:51.559 00:04:51.559 real 0m1.632s 00:04:51.559 user 0m1.780s 00:04:51.559 sys 0m0.429s 00:04:51.559 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:51.559 19:34:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:51.559 ************************************ 00:04:51.559 END TEST dpdk_mem_utility 00:04:51.559 ************************************ 00:04:51.559 19:34:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.559 19:34:08 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:51.559 19:34:08 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:51.559 19:34:08 -- common/autotest_common.sh@10 -- # set +x 00:04:51.559 ************************************ 00:04:51.559 START TEST event 00:04:51.559 ************************************ 00:04:51.559 19:34:08 event -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:51.559 * Looking for test storage... 
00:04:51.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:51.559 19:34:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:51.559 19:34:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:51.559 19:34:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.559 19:34:08 event -- common/autotest_common.sh@1102 -- # '[' 6 -le 1 ']' 00:04:51.559 19:34:08 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:51.559 19:34:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.559 ************************************ 00:04:51.559 START TEST event_perf 00:04:51.559 ************************************ 00:04:51.559 19:34:08 event.event_perf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:51.559 Running I/O for 1 seconds...[2024-07-24 19:34:08.815747] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:51.559 [2024-07-24 19:34:08.815814] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053943 ] 00:04:51.559 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.559 [2024-07-24 19:34:08.877046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:51.818 [2024-07-24 19:34:08.998897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.818 [2024-07-24 19:34:08.998947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.818 [2024-07-24 19:34:08.999064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:51.818 [2024-07-24 19:34:08.999067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.750 Running I/O for 1 seconds... 00:04:52.750 lcore 0: 234514 00:04:52.750 lcore 1: 234515 00:04:52.750 lcore 2: 234515 00:04:52.750 lcore 3: 234517 00:04:52.750 done. 00:04:52.750 00:04:52.750 real 0m1.325s 00:04:52.750 user 0m4.228s 00:04:52.750 sys 0m0.092s 00:04:52.750 19:34:10 event.event_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:52.750 19:34:10 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.750 ************************************ 00:04:52.750 END TEST event_perf 00:04:52.750 ************************************ 00:04:53.008 19:34:10 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.008 19:34:10 event -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:04:53.008 19:34:10 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:53.008 19:34:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.008 ************************************ 00:04:53.008 START TEST event_reactor 00:04:53.008 ************************************ 00:04:53.008 19:34:10 event.event_reactor -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:53.008 [2024-07-24 19:34:10.178719] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:04:53.008 [2024-07-24 19:34:10.178772] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054205 ] 00:04:53.008 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.008 [2024-07-24 19:34:10.239732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.008 [2024-07-24 19:34:10.358629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.379 test_start 00:04:54.379 oneshot 00:04:54.379 tick 100 00:04:54.379 tick 100 00:04:54.379 tick 250 00:04:54.379 tick 100 00:04:54.379 tick 100 00:04:54.379 tick 100 00:04:54.379 tick 250 00:04:54.379 tick 500 00:04:54.379 tick 100 00:04:54.379 tick 100 00:04:54.379 tick 250 00:04:54.379 tick 100 00:04:54.379 tick 100 00:04:54.379 test_end 00:04:54.379 00:04:54.379 real 0m1.313s 00:04:54.379 user 0m1.230s 00:04:54.379 sys 0m0.079s 00:04:54.379 19:34:11 event.event_reactor -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:54.379 19:34:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:54.379 ************************************ 00:04:54.379 END TEST event_reactor 00:04:54.379 ************************************ 00:04:54.379 19:34:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.379 19:34:11 event -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:04:54.379 19:34:11 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:54.379 19:34:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.379 ************************************ 00:04:54.379 START TEST event_reactor_perf 00:04:54.379 ************************************ 00:04:54.379 19:34:11 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:54.379 [2024-07-24 19:34:11.539342] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:04:54.379 [2024-07-24 19:34:11.539401] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054373 ] 00:04:54.379 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.379 [2024-07-24 19:34:11.599677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.379 [2024-07-24 19:34:11.717266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.752 test_start 00:04:55.752 test_end 00:04:55.752 Performance: 358158 events per second 00:04:55.752 00:04:55.752 real 0m1.311s 00:04:55.752 user 0m1.226s 00:04:55.752 sys 0m0.080s 00:04:55.752 19:34:12 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:55.752 19:34:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.752 ************************************ 00:04:55.752 END TEST event_reactor_perf 00:04:55.752 ************************************ 00:04:55.752 19:34:12 event -- event/event.sh@49 -- # uname -s 00:04:55.752 19:34:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:55.752 19:34:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:55.752 19:34:12 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:55.752 19:34:12 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:55.752 19:34:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.752 ************************************ 00:04:55.752 START TEST event_scheduler 00:04:55.752 ************************************ 00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:55.752 * Looking for test storage... 00:04:55.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:55.752 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:55.752 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1054556 00:04:55.752 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:55.752 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.752 19:34:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1054556 00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@832 -- # '[' -z 1054556 ']' 00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:55.752 19:34:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:55.752 [2024-07-24 19:34:12.983949] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:04:55.752 [2024-07-24 19:34:12.984036] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054556 ] 00:04:55.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.752 [2024-07-24 19:34:13.042859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:56.010 [2024-07-24 19:34:13.156769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.010 [2024-07-24 19:34:13.156823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.010 [2024-07-24 19:34:13.156889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:56.010 [2024-07-24 19:34:13.156892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@865 -- # return 0 00:04:56.010 19:34:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.010 [2024-07-24 19:34:13.193686] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:56.010 [2024-07-24 19:34:13.193713] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:56.010 [2024-07-24 19:34:13.193730] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:56.010 [2024-07-24 19:34:13.193741] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:56.010 [2024-07-24 19:34:13.193750] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.010 19:34:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.010 [2024-07-24 19:34:13.290120] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
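The setup sequence traced above runs while the scheduler test app is parked at --wait-for-rpc: framework_set_scheduler dynamic fails over from the DPDK governor (the dpdk_governor ERROR followed by the scheduler_dynamic NOTICEs reporting load limit 20, core limit 80, core busy 95), and framework_start_init then releases the reactors. The same two RPC steps could be issued by hand roughly as below; the working directory, the default RPC socket, and the sleep standing in for waitforlisten are assumptions.

  # run from the SPDK repo root; uses the default /var/tmp/spdk.sock
  test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
  sleep 1                                        # stand-in for waitforlisten
  scripts/rpc.py framework_set_scheduler dynamic # falls back if the dpdk governor cannot init
  scripts/rpc.py framework_start_init            # starts the test application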
00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.010 19:34:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:56.010 19:34:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.010 ************************************ 00:04:56.010 START TEST scheduler_create_thread 00:04:56.010 ************************************ 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # scheduler_create_thread 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.010 2 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.010 3 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.010 4 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:56.010 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.011 5 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.011 6 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.011 7 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.011 8 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.011 9 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.011 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.268 10 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@562 -- # xtrace_disable 00:04:56.268 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.861 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:04:56.861 00:04:56.861 real 0m0.590s 00:04:56.861 user 0m0.007s 00:04:56.861 sys 0m0.006s 00:04:56.861 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:56.861 19:34:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.861 ************************************ 00:04:56.861 END TEST scheduler_create_thread 00:04:56.861 ************************************ 00:04:56.861 19:34:13 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.861 19:34:13 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1054556 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' -z 1054556 ']' 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@955 -- # kill -0 1054556 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@956 -- # uname 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1054556 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1054556' 00:04:56.861 killing process with pid 1054556 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@970 -- # kill 1054556 00:04:56.861 19:34:13 event.event_scheduler -- common/autotest_common.sh@975 -- # wait 1054556 00:04:57.125 [2024-07-24 19:34:14.390327] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
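The scheduler_create_thread sub-test above drives everything through rpc.py's --plugin hook rather than core RPCs; scheduler_plugin is the helper module that ships alongside the test, and the ids 11 and 12 passed to scheduler_thread_set_active and scheduler_thread_delete are the ids the create calls returned in this run. By hand the same calls would look roughly like this (the PYTHONPATH handling is an assumption, and real ids must be read back from each create call's output):

  export PYTHONPATH=$PYTHONPATH:test/event/scheduler
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50  # id from the create output
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12         # id from the create output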
00:04:57.383 00:04:57.383 real 0m1.760s 00:04:57.383 user 0m2.193s 00:04:57.383 sys 0m0.325s 00:04:57.383 19:34:14 event.event_scheduler -- common/autotest_common.sh@1127 -- # xtrace_disable 00:04:57.383 19:34:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.383 ************************************ 00:04:57.383 END TEST event_scheduler 00:04:57.383 ************************************ 00:04:57.383 19:34:14 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:57.383 19:34:14 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:57.383 19:34:14 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:04:57.383 19:34:14 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:04:57.383 19:34:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.383 ************************************ 00:04:57.383 START TEST app_repeat 00:04:57.383 ************************************ 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@1126 -- # app_repeat_test 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1054832 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1054832' 00:04:57.383 Process app_repeat pid: 1054832 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:57.383 spdk_app_start Round 0 00:04:57.383 19:34:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1054832 /var/tmp/spdk-nbd.sock 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 1054832 ']' 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:04:57.383 19:34:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.383 [2024-07-24 19:34:14.725354] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
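app_repeat then takes over: the binary is started against its own RPC socket with a two-core mask and a four-second timer, a cleanup trap is installed, and the harness waits on the socket before each of three rounds. A sketch of that shape; the flags and trap text match the trace, but waitforlisten's polling body is an assumption:

    # Sketch of the app_repeat harness shape; not the test/event/event.sh source.
    app=./test/event/app_repeat/app_repeat              # full jenkins path in the log
    $app -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'kill -9 $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"
    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        # assumed polling loop: wait until the app answers on its UNIX socket
        until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &>/dev/null; do
            sleep 0.1
        done
        # per-round nbd traffic, then spdk_kill_instance SIGTERM (see trace)
    done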
00:04:57.383 [2024-07-24 19:34:14.725419] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054832 ] 00:04:57.383 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.641 [2024-07-24 19:34:14.791056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:57.641 [2024-07-24 19:34:14.907263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.641 [2024-07-24 19:34:14.907268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.641 19:34:15 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:04:57.641 19:34:15 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:04:57.641 19:34:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.899 Malloc0 00:04:57.899 19:34:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.157 Malloc1 00:04:58.415 19:34:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.415 19:34:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.415 /dev/nbd0 00:04:58.673 19:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.673 19:34:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:58.673 19:34:15 event.app_repeat 
-- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.673 1+0 records in 00:04:58.673 1+0 records out 00:04:58.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167362 s, 24.5 MB/s 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:04:58.673 19:34:15 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:04:58.673 19:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.673 19:34:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.673 19:34:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.930 /dev/nbd1 00:04:58.930 19:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.930 19:34:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:04:58.930 19:34:16 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.930 1+0 records in 00:04:58.930 1+0 records out 00:04:58.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199387 s, 20.5 MB/s 00:04:58.931 19:34:16 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.931 19:34:16 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:04:58.931 19:34:16 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.931 19:34:16 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:04:58.931 19:34:16 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:04:58.931 19:34:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.931 19:34:16 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.931 19:34:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.931 19:34:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.931 19:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.188 { 00:04:59.188 "nbd_device": "/dev/nbd0", 00:04:59.188 "bdev_name": "Malloc0" 00:04:59.188 }, 00:04:59.188 { 00:04:59.188 "nbd_device": "/dev/nbd1", 00:04:59.188 "bdev_name": "Malloc1" 00:04:59.188 } 00:04:59.188 ]' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.188 { 00:04:59.188 "nbd_device": "/dev/nbd0", 00:04:59.188 "bdev_name": "Malloc0" 00:04:59.188 }, 00:04:59.188 { 00:04:59.188 "nbd_device": "/dev/nbd1", 00:04:59.188 "bdev_name": "Malloc1" 00:04:59.188 } 00:04:59.188 ]' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.188 /dev/nbd1' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.188 /dev/nbd1' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.188 256+0 records in 00:04:59.188 256+0 records out 00:04:59.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509339 s, 206 MB/s 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.188 256+0 records in 00:04:59.188 256+0 records out 00:04:59.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214833 s, 48.8 MB/s 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.188 256+0 records in 00:04:59.188 256+0 records out 00:04:59.188 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0245911 s, 42.6 MB/s 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.188 19:34:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.189 19:34:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.446 19:34:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.704 19:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.704 19:34:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.704 19:34:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.704 19:34:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.704 19:34:16 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.704 19:34:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.704 19:34:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.704 19:34:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.704 19:34:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.704 19:34:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.704 19:34:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.962 19:34:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.962 19:34:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.219 19:34:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.476 [2024-07-24 19:34:17.837236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.734 [2024-07-24 19:34:17.953400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.734 [2024-07-24 19:34:17.953400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.734 [2024-07-24 19:34:18.012553] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.734 [2024-07-24 19:34:18.012672] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.262 19:34:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.262 19:34:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.262 spdk_app_start Round 1 00:05:03.262 19:34:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1054832 /var/tmp/spdk-nbd.sock 00:05:03.262 19:34:20 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 1054832 ']' 00:05:03.262 19:34:20 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.262 19:34:20 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:03.262 19:34:20 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
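Round 0's data path is worth spelling out: nbd_dd_data_verify writes 1 MiB of random data through both nbd devices with O_DIRECT, byte-compares each device against the source file, then deletes it. A condensed sketch of that write/verify step, with the file and device names taken from the trace:

    # Condensed from the nbd_dd_data_verify trace above; a sketch, not the
    # bdev/nbd_common.sh source.
    tmp=./nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256               # 1 MiB source buffer
    for dev in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct    # write through the nbd block device
    done
    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$dev"                               # fails loudly on any mismatched byte
    done
    rm "$tmp"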
00:05:03.262 19:34:20 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:03.262 19:34:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.519 19:34:20 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:03.520 19:34:20 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:05:03.520 19:34:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.777 Malloc0 00:05:03.777 19:34:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.036 Malloc1 00:05:04.036 19:34:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.036 19:34:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.294 /dev/nbd0 00:05:04.294 19:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.294 19:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:04.294 1+0 records in 00:05:04.294 1+0 records out 00:05:04.294 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222883 s, 18.4 MB/s 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:05:04.294 19:34:21 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:05:04.294 19:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.294 19:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.294 19:34:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.552 /dev/nbd1 00:05:04.552 19:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.552 19:34:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:05:04.552 19:34:21 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:05:04.553 19:34:21 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.553 1+0 records in 00:05:04.553 1+0 records out 00:05:04.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023447 s, 17.5 MB/s 00:05:04.553 19:34:21 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.553 19:34:21 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:05:04.553 19:34:21 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.553 19:34:21 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:05:04.553 19:34:21 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:05:04.553 19:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.553 19:34:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.553 19:34:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.553 19:34:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.553 19:34:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.811 19:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:04.811 { 00:05:04.811 "nbd_device": "/dev/nbd0", 00:05:04.811 "bdev_name": "Malloc0" 00:05:04.811 }, 00:05:04.811 { 00:05:04.811 "nbd_device": "/dev/nbd1", 00:05:04.811 "bdev_name": "Malloc1" 00:05:04.811 } 00:05:04.811 ]' 00:05:04.811 19:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.811 { 00:05:04.811 "nbd_device": "/dev/nbd0", 00:05:04.811 "bdev_name": "Malloc0" 00:05:04.811 }, 00:05:04.811 { 00:05:04.811 "nbd_device": "/dev/nbd1", 00:05:04.811 "bdev_name": "Malloc1" 00:05:04.811 } 00:05:04.811 ]' 00:05:04.811 19:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.069 /dev/nbd1' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.069 /dev/nbd1' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.069 256+0 records in 00:05:05.069 256+0 records out 00:05:05.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00463051 s, 226 MB/s 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.069 256+0 records in 00:05:05.069 256+0 records out 00:05:05.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246912 s, 42.5 MB/s 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.069 256+0 records in 00:05:05.069 256+0 records out 00:05:05.069 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229196 s, 45.8 MB/s 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.069 19:34:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.327 19:34:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.585 19:34:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.842 19:34:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.842 19:34:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.105 19:34:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.364 [2024-07-24 19:34:23.664290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.622 [2024-07-24 19:34:23.779413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.622 [2024-07-24 19:34:23.779418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.622 [2024-07-24 19:34:23.842021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.622 [2024-07-24 19:34:23.842099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.148 19:34:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.148 19:34:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.148 spdk_app_start Round 2 00:05:09.148 19:34:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1054832 /var/tmp/spdk-nbd.sock 00:05:09.148 19:34:26 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 1054832 ']' 00:05:09.148 19:34:26 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.148 19:34:26 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:09.148 19:34:26 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
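Each teardown is verified, not assumed: after nbd_stop_disk the harness re-runs nbd_get_disks and requires the jq-extracted device list to grep-count to zero. A small sketch of that check; the jq filter is copied from the trace, and the '|| true' stands in for the bare 'true' the script uses, since grep -c exits non-zero when it finds nothing:

    # Sketch of the nbd_get_count teardown check traced above.
    json=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)            # prints 0 on an empty list
    if [ "$count" -ne 0 ]; then
        echo "nbd devices still attached: $count" >&2
        exit 1
    fi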
00:05:09.148 19:34:26 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:09.148 19:34:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.406 19:34:26 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:09.406 19:34:26 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:05:09.406 19:34:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.663 Malloc0 00:05:09.663 19:34:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.920 Malloc1 00:05:09.920 19:34:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.920 19:34:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.920 19:34:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.920 19:34:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.920 19:34:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.920 19:34:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.921 19:34:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.178 /dev/nbd0 00:05:10.178 19:34:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.178 19:34:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd0 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd0 /proc/partitions 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:10.178 1+0 records in 00:05:10.178 1+0 records out 00:05:10.178 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211328 s, 19.4 MB/s 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:05:10.178 19:34:27 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:05:10.178 19:34:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.178 19:34:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.178 19:34:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.435 /dev/nbd1 00:05:10.435 19:34:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.435 19:34:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@869 -- # local nbd_name=nbd1 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@870 -- # local i 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@873 -- # grep -q -w nbd1 /proc/partitions 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@874 -- # break 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@885 -- # (( i = 1 )) 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@885 -- # (( i <= 20 )) 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@886 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.435 1+0 records in 00:05:10.435 1+0 records out 00:05:10.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173013 s, 23.7 MB/s 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@887 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@887 -- # size=4096 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@888 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@889 -- # '[' 4096 '!=' 0 ']' 00:05:10.435 19:34:27 event.app_repeat -- common/autotest_common.sh@890 -- # return 0 00:05:10.435 19:34:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.435 19:34:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.435 19:34:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.435 19:34:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.436 19:34:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.693 19:34:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:10.693 { 00:05:10.693 "nbd_device": "/dev/nbd0", 00:05:10.693 "bdev_name": "Malloc0" 00:05:10.693 }, 00:05:10.693 { 00:05:10.693 "nbd_device": "/dev/nbd1", 00:05:10.693 "bdev_name": "Malloc1" 00:05:10.693 } 00:05:10.693 ]' 00:05:10.693 19:34:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.693 { 00:05:10.693 "nbd_device": "/dev/nbd0", 00:05:10.693 "bdev_name": "Malloc0" 00:05:10.693 }, 00:05:10.693 { 00:05:10.693 "nbd_device": "/dev/nbd1", 00:05:10.693 "bdev_name": "Malloc1" 00:05:10.693 } 00:05:10.693 ]' 00:05:10.693 19:34:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.693 /dev/nbd1' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.693 /dev/nbd1' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.693 256+0 records in 00:05:10.693 256+0 records out 00:05:10.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501841 s, 209 MB/s 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.693 256+0 records in 00:05:10.693 256+0 records out 00:05:10.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201157 s, 52.1 MB/s 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.693 256+0 records in 00:05:10.693 256+0 records out 00:05:10.693 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250779 s, 41.8 MB/s 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.693 19:34:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.950 19:34:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.950 19:34:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.950 19:34:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.951 19:34:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.951 19:34:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.951 19:34:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.951 19:34:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.208 19:34:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.465 19:34:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.722 19:34:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.722 19:34:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.980 19:34:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:12.237 [2024-07-24 19:34:29.454992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:12.237 [2024-07-24 19:34:29.570166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.237 [2024-07-24 19:34:29.570167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.494 [2024-07-24 19:34:29.632847] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:12.494 [2024-07-24 19:34:29.632922] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.064 19:34:32 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1054832 /var/tmp/spdk-nbd.sock 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@832 -- # '[' -z 1054832 ']' 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:15.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
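Attach and detach are both gated on /proc/partitions: waitfornbd polls up to 20 times for the device name and then proves the device answers I/O with a single direct 4 KiB read, while waitfornbd_exit polls until the name disappears. A sketch of the pair, with the retry bound, grep, dd and stat checks taken from the trace and the inter-poll delay assumed:

    # Sketch of the waitfornbd / waitfornbd_exit pattern traced above.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                  # assumed poll interval
        done
        # one direct read proves the device actually services requests
        dd if=/dev/"$nbd_name" of=./nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s ./nbdtest)
        rm -f ./nbdtest
        [ "$size" != 0 ]                               # 4096 in every round above
    }
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                                  # assumed poll interval
        done
    }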
00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@865 -- # return 0 00:05:15.064 19:34:32 event.app_repeat -- event/event.sh@39 -- # killprocess 1054832 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@951 -- # '[' -z 1054832 ']' 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@955 -- # kill -0 1054832 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@956 -- # uname 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:05:15.064 19:34:32 event.app_repeat -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1054832 00:05:15.322 19:34:32 event.app_repeat -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:05:15.322 19:34:32 event.app_repeat -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:05:15.322 19:34:32 event.app_repeat -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1054832' 00:05:15.322 killing process with pid 1054832 00:05:15.322 19:34:32 event.app_repeat -- common/autotest_common.sh@970 -- # kill 1054832 00:05:15.322 19:34:32 event.app_repeat -- common/autotest_common.sh@975 -- # wait 1054832 00:05:15.579 spdk_app_start is called in Round 0. 00:05:15.579 Shutdown signal received, stop current app iteration 00:05:15.579 Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 reinitialization... 00:05:15.579 spdk_app_start is called in Round 1. 00:05:15.579 Shutdown signal received, stop current app iteration 00:05:15.579 Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 reinitialization... 00:05:15.579 spdk_app_start is called in Round 2. 00:05:15.579 Shutdown signal received, stop current app iteration 00:05:15.579 Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 reinitialization... 00:05:15.579 spdk_app_start is called in Round 3. 
00:05:15.579 Shutdown signal received, stop current app iteration 00:05:15.579 19:34:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:15.579 19:34:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:15.579 00:05:15.579 real 0m18.018s 00:05:15.579 user 0m38.972s 00:05:15.579 sys 0m3.188s 00:05:15.579 19:34:32 event.app_repeat -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:15.579 19:34:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.579 ************************************ 00:05:15.579 END TEST app_repeat 00:05:15.579 ************************************ 00:05:15.579 19:34:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:15.579 19:34:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.579 19:34:32 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:05:15.579 19:34:32 event -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:15.579 19:34:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.579 ************************************ 00:05:15.579 START TEST cpu_locks 00:05:15.579 ************************************ 00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:15.579 * Looking for test storage... 00:05:15.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.579 ************************************ 00:05:15.579 START TEST default_locks 00:05:15.579 ************************************ 00:05:15.579 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # default_locks 00:05:15.579 19:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1057221 00:05:15.579 19:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:15.580 19:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1057221 00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # '[' -z 1057221 ']' 00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
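The app_repeat run that just finished drives the event app through four SIGTERM-triggered restart rounds and, between rounds, checks over the app's dedicated RPC socket that no NBD devices were left behind. Condensed into a standalone Bash sketch of what the traced commands do (paths and socket names are taken from the log; this is a simplification, not the harness itself):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # nbd_get_disks returns a JSON array; '[]' means nothing is attached
  count=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -ne 0 ] && echo "unexpected nbd devices: $count"
  # ask the running app to shut down, which starts the next round
  $rpc -s "$sock" spdk_kill_instance SIGTERM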
00:05:15.579 19:34:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:15.579 19:34:32 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:15.579 19:34:32 event -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:15.579 19:34:32 event -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:15.579 19:34:32 event -- common/autotest_common.sh@10 -- # set +x
00:05:15.579 ************************************
00:05:15.579 START TEST cpu_locks
************************************
00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:15.579 * Looking for test storage...
00:05:15.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:15.579 19:34:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:15.579 19:34:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.579 ************************************
00:05:15.579 START TEST default_locks
************************************
00:05:15.579 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # default_locks
00:05:15.579 19:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1057221
00:05:15.579 19:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:15.579 19:34:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1057221
00:05:15.579 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # '[' -z 1057221 ']'
00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:15.580 19:34:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.580 [2024-07-24 19:34:32.893874] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:15.580 [2024-07-24 19:34:32.893968] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057221 ]
00:05:15.580 EAL: No free 2048 kB hugepages reported on node 1
00:05:15.580 [2024-07-24 19:34:32.950527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.837 [2024-07-24 19:34:33.058736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.094 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:16.094 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@865 -- # return 0
00:05:16.094 19:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1057221
00:05:16.094 19:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1057221
00:05:16.094 19:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:16.351 lslocks: write error
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1057221
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' -z 1057221 ']'
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # kill -0 1057221
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # uname
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1057221
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1057221'
killing process with pid 1057221
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # kill 1057221
00:05:16.351 19:34:33 event.cpu_locks.default_locks -- common/autotest_common.sh@975 -- # wait 1057221
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1057221
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # local es=0
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # valid_exec_arg waitforlisten 1057221
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@639 -- # local arg=waitforlisten
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@643 -- # type -t waitforlisten
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # waitforlisten 1057221
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # '[' -z 1057221 ']'
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:16.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 847: kill: (1057221) - No such process
00:05:16.916 ERROR: process (pid: 1057221) is no longer running
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@865 -- # return 1
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # es=1
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@662 -- # (( es > 128 ))
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@673 -- # [[ -n '' ]]
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@678 -- # (( !es == 0 ))
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:16.916
00:05:16.916 real 0m1.169s
00:05:16.916 user 0m1.090s
00:05:16.916 sys 0m0.507s
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # xtrace_disable
00:05:16.916 19:34:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:16.916 ************************************
00:05:16.916 END TEST default_locks
00:05:16.916 ************************************
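default_locks, traced above, reduces to one observable: a target started with -m 0x1 must hold a POSIX lock on a /var/tmp/spdk_cpu_lock_* file (visible through util-linux lslocks), and once the process is killed a renewed waitforlisten must fail with "No such process". A minimal sketch of the same check, assuming the binary path from the log is valid on the machine at hand:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &
  pid=$!
  sleep 1                                       # crude stand-in for the harness's waitforlisten
  # locks_exist: core 0 is pinned, so a spdk_cpu_lock entry must be listed
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
  kill "$pid" && wait "$pid"                    # the lock dies with the process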
00:05:16.916 19:34:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:16.916 19:34:34 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:16.916 19:34:34 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:16.916 19:34:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:16.916 ************************************
00:05:16.916 START TEST default_locks_via_rpc
************************************
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # default_locks_via_rpc
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1057383
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1057383
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 1057383 ']'
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:16.916 19:34:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:16.916 [2024-07-24 19:34:34.113906] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:16.916 [2024-07-24 19:34:34.114007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057383 ]
00:05:16.916 EAL: No free 2048 kB hugepages reported on node 1
00:05:16.916 [2024-07-24 19:34:34.177170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.916 [2024-07-24 19:34:34.290566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@865 -- # return 0
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@562 -- # xtrace_disable
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@562 -- # xtrace_disable
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1057383
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1057383
00:05:17.848 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:18.105 19:34:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1057383
00:05:18.105 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' -z 1057383 ']'
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # kill -0 1057383
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # uname
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1057383
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1057383'
killing process with pid 1057383
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # kill 1057383
00:05:18.106 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@975 -- # wait 1057383
00:05:18.669
00:05:18.669 real 0m1.841s
00:05:18.669 user 0m1.974s
00:05:18.669 sys 0m0.560s
00:05:18.669 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable
00:05:18.669 19:34:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:18.669 ************************************
00:05:18.669 END TEST default_locks_via_rpc
00:05:18.669 ************************************
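default_locks_via_rpc exercises the same lock files but over JSON-RPC: framework_disable_cpumask_locks releases them at runtime and framework_enable_cpumask_locks takes them back, both visible in the trace above. The round trip in isolation ($pid is assumed to be the running target from the previous sketch):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks          # target drops /var/tmp/spdk_cpu_lock_*
  lslocks -p "$pid" | grep -c spdk_cpu_lock     # now counts 0
  $rpc framework_enable_cpumask_locks           # locks re-acquired while running
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "locks back in place"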
00:05:18.669 19:34:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:18.670 19:34:35 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:18.670 19:34:35 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:18.670 19:34:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.670 ************************************
00:05:18.670 START TEST non_locking_app_on_locked_coremask
************************************
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # non_locking_app_on_locked_coremask
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1057563
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1057563 /var/tmp/spdk.sock
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 1057563 ']'
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:18.670 19:34:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.670 [2024-07-24 19:34:35.996051] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:18.670 [2024-07-24 19:34:35.996137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057563 ]
00:05:18.927 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.927 [2024-07-24 19:34:36.056510] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.927 [2024-07-24 19:34:36.164867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 0
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1057687
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1057687 /var/tmp/spdk2.sock
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 1057687 ']'
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:19.185 19:34:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.443 [2024-07-24 19:34:36.482500] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:19.443 [2024-07-24 19:34:36.482598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057687 ]
00:05:19.443 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.443 [2024-07-24 19:34:36.578259] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:19.443 [2024-07-24 19:34:36.578297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.443 [2024-07-24 19:34:36.810810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.376 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:20.376 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 0
00:05:20.376 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1057563
00:05:20.376 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1057563
00:05:20.376 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.634 lslocks: write error
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1057563
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' -z 1057563 ']'
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # kill -0 1057563
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # uname
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1057563
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1057563'
killing process with pid 1057563
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # kill 1057563
00:05:20.634 19:34:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # wait 1057563
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1057687
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' -z 1057687 ']'
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # kill -0 1057687
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # uname
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1057687
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1057687'
killing process with pid 1057687
00:05:21.568 19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # kill 1057687
19:34:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # wait 1057687
00:05:22.133
00:05:22.133 real 0m3.300s
00:05:22.133 user 0m3.429s
00:05:22.133 sys 0m1.046s
00:05:22.133 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable
00:05:22.133 19:34:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.134 ************************************
00:05:22.134 END TEST non_locking_app_on_locked_coremask
00:05:22.134 ************************************
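The non_locking case that just passed is the reason --disable-cpumask-locks exists: a second target may share core 0 with a lock-holding first target only if it skips lock acquisition and uses its own RPC socket. Roughly, with the same binary as in the sketches above:

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 &
  pid1=$!                                       # claims the core 0 lock file
  $spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                       # same core, no lock taken
  # both targets run; only pid1 shows a spdk_cpu_lock entry in lslocks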
00:05:22.134 19:34:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:22.134 19:34:39 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:22.134 19:34:39 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:22.134 19:34:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:22.134 ************************************
00:05:22.134 START TEST locking_app_on_unlocked_coremask
************************************
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # locking_app_on_unlocked_coremask
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1057997
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1057997 /var/tmp/spdk.sock
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # '[' -z 1057997 ']'
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:22.134 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.134 [2024-07-24 19:34:39.340205] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:22.134 [2024-07-24 19:34:39.340323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1057997 ]
00:05:22.134 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.134 [2024-07-24 19:34:39.402417] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:22.134 [2024-07-24 19:34:39.402456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.392 [2024-07-24 19:34:39.518489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@865 -- # return 0
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1058121
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1058121 /var/tmp/spdk2.sock
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # '[' -z 1058121 ']'
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:22.650 19:34:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.650 [2024-07-24 19:34:39.839921] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:22.650 [2024-07-24 19:34:39.840021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058121 ]
00:05:22.650 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.908 [2024-07-24 19:34:39.934853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:22.908 [2024-07-24 19:34:40.178240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:23.474 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:23.474 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@865 -- # return 0
00:05:23.474 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1058121
00:05:23.474 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1058121
00:05:23.474 19:34:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:24.040 lslocks: write error
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1057997
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' -z 1057997 ']'
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # kill -0 1057997
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # uname
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1057997
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1057997'
killing process with pid 1057997
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # kill 1057997
00:05:24.040 19:34:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # wait 1057997
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1058121
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' -z 1058121 ']'
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # kill -0 1058121
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # uname
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1058121
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1058121'
killing process with pid 1058121
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # kill 1058121
00:05:24.972 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@975 -- # wait 1058121
00:05:25.229
00:05:25.229 real 0m3.279s
00:05:25.229 user 0m3.419s
00:05:25.229 sys 0m1.032s
00:05:25.229 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable
00:05:25.229 19:34:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:25.229 ************************************
00:05:25.229 END TEST locking_app_on_unlocked_coremask
00:05:25.229 ************************************
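locking_app_on_unlocked_coremask mirrors the previous test (the first target runs lock-less, the second takes the locks), and both tear down through the killprocess() idiom that recurs all over this trace. That idiom is plain shell and worth isolating ($pid assumed to hold the target's pid):

  kill -0 "$pid"                                # liveness probe, delivers no signal
  process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for a healthy target
  if [ "$process_name" != sudo ]; then          # refuse to kill an elevated wrapper
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"
  fi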
00:05:25.229 19:34:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:25.229 19:34:42 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:25.229 19:34:42 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:25.229 19:34:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:25.487 ************************************
00:05:25.487 START TEST locking_app_on_locked_coremask
************************************
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # locking_app_on_locked_coremask
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1058434
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1058434 /var/tmp/spdk.sock
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 1058434 ']'
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:25.487 19:34:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:25.487 [2024-07-24 19:34:42.665922] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:25.487 [2024-07-24 19:34:42.666003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058434 ]
00:05:25.487 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.487 [2024-07-24 19:34:42.722669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.487 [2024-07-24 19:34:42.833119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 0
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1058555
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1058555 /var/tmp/spdk2.sock
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # local es=0
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # valid_exec_arg waitforlisten 1058555 /var/tmp/spdk2.sock
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@639 -- # local arg=waitforlisten
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@643 -- # type -t waitforlisten
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # waitforlisten 1058555 /var/tmp/spdk2.sock
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # '[' -z 1058555 ']'
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:25.746 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:26.004 [2024-07-24 19:34:43.145802] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:26.004 [2024-07-24 19:34:43.145889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058555 ]
00:05:26.004 EAL: No free 2048 kB hugepages reported on node 1
00:05:26.004 [2024-07-24 19:34:43.242048] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1058434 has claimed it.
00:05:26.004 [2024-07-24 19:34:43.242107] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:26.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 847: kill: (1058555) - No such process
00:05:26.567 ERROR: process (pid: 1058555) is no longer running
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@865 -- # return 1
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # es=1
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@662 -- # (( es > 128 ))
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@673 -- # [[ -n '' ]]
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@678 -- # (( !es == 0 ))
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1058434
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1058434
00:05:26.567 19:34:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:27.132 lslocks: write error
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1058434
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' -z 1058434 ']'
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # kill -0 1058434
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # uname
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1058434
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1058434'
killing process with pid 1058434
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # kill 1058434
00:05:27.132 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@975 -- # wait 1058434
00:05:27.391
00:05:27.391 real 0m2.095s
00:05:27.391 user 0m2.249s
00:05:27.391 sys 0m0.665s
00:05:27.391 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable
00:05:27.391 19:34:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:27.391 ************************************
00:05:27.391 END TEST locking_app_on_locked_coremask
00:05:27.391 ************************************
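locking_app_on_locked_coremask inverts the earlier case: the second target keeps lock checking on, so it must die with "Cannot create lock on core 0" and the harness asserts that failure through its NOT wrapper. Stripped of the harness, the expectation looks roughly like this (reusing $spdk_tgt from the sketches above; the wait-based assertion is an illustration, not the suite's exact mechanism):

  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # core 0 is already claimed
  pid2=$!
  if wait "$pid2"; then
      echo "ERROR: second target started despite the held core lock" >&2
  fi                                            # a non-zero exit is the expected outcome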
00:05:27.391 19:34:44 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:27.391 19:34:44 event.cpu_locks -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:05:27.391 19:34:44 event.cpu_locks -- common/autotest_common.sh@1108 -- # xtrace_disable
00:05:27.391 19:34:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:27.391 ************************************
00:05:27.391 START TEST locking_overlapped_coremask
************************************
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # locking_overlapped_coremask
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1058730
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1058730 /var/tmp/spdk.sock
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # '[' -z 1058730 ']'
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:27.391 19:34:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:27.649 [2024-07-24 19:34:44.815508] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:27.649 [2024-07-24 19:34:44.815612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058730 ]
00:05:27.649 EAL: No free 2048 kB hugepages reported on node 1
00:05:27.649 [2024-07-24 19:34:44.879035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:27.649 [2024-07-24 19:34:44.995655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:27.649 [2024-07-24 19:34:44.995706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:27.649 [2024-07-24 19:34:44.995725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@865 -- # return 0
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1058867
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1058867 /var/tmp/spdk2.sock
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # local es=0
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # valid_exec_arg waitforlisten 1058867 /var/tmp/spdk2.sock
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@639 -- # local arg=waitforlisten
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@643 -- # type -t waitforlisten
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # waitforlisten 1058867 /var/tmp/spdk2.sock
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # '[' -z 1058867 ']'
00:05:28.582 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:28.583 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local max_retries=100
00:05:28.583 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:28.583 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@841 -- # xtrace_disable
00:05:28.583 19:34:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.583 [2024-07-24 19:34:45.802815] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:05:28.583 [2024-07-24 19:34:45.802915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1058867 ]
00:05:28.583 EAL: No free 2048 kB hugepages reported on node 1
00:05:28.583 [2024-07-24 19:34:45.890722] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1058730 has claimed it.
00:05:28.583 [2024-07-24 19:34:45.890786] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:29.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 847: kill: (1058867) - No such process
00:05:29.148 ERROR: process (pid: 1058867) is no longer running
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@865 -- # return 1
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # es=1
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@662 -- # (( es > 128 ))
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@673 -- # [[ -n '' ]]
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@678 -- # (( !es == 0 ))
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1058730
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' -z 1058730 ']'
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # kill -0 1058730
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # uname
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:05:29.148 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1058730
00:05:29.149 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:05:29.149 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:05:29.149 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1058730'
killing process with pid 1058730
19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # kill 1058730
00:05:29.149 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@975 -- # wait 1058730
00:05:29.714
00:05:29.714 real 0m2.217s
00:05:29.714 user 0m6.219s
00:05:29.714 sys 0m0.467s
00:05:29.714 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # xtrace_disable
00:05:29.714 19:34:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:29.714 ************************************
00:05:29.714 END TEST locking_overlapped_coremask
00:05:29.714 ************************************
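check_remaining_locks, visible verbatim in the trace above, is just a glob comparison: a target started with -m 0x7 must own exactly the lock files for cores 0-2 and nothing else:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] || echo "stray lock files: ${locks[*]}"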
00:05:29.972 [2024-07-24 19:34:47.137339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.972 [2024-07-24 19:34:47.247754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.972 [2024-07-24 19:34:47.247821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.972 [2024-07-24 19:34:47.247824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1059158 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1059158 /var/tmp/spdk2.sock 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 1059158 ']' 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:30.230 19:34:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.230 [2024-07-24 19:34:47.548731] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:30.230 [2024-07-24 19:34:47.548829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059158 ] 00:05:30.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.488 [2024-07-24 19:34:47.635047] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:30.488 [2024-07-24 19:34:47.635080] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.488 [2024-07-24 19:34:47.858651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:30.488 [2024-07-24 19:34:47.858714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:30.488 [2024-07-24 19:34:47.862264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.454 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # local es=0 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.455 [2024-07-24 19:34:48.500337] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1059030 has claimed it. 
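The claim failure above is the expected result of the overlapping coremasks: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so once both processes try to take the per-core lock files they collide on core 2. A minimal sketch of the same conflict driven by hand, using only the binaries, flags and RPC method shown in this test (paths relative to the spdk tree; PIDs will differ):

  # start two targets with overlapping masks; --disable-cpumask-locks defers locking to the RPC
  build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &

  # first claim succeeds and creates /var/tmp/spdk_cpu_lock_000 .. _002
  scripts/rpc.py framework_enable_cpumask_locks

  # second claim fails on the shared core 2 with code -32603, as in the JSON below
  scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks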
00:05:31.455 request: 00:05:31.455 { 00:05:31.455 "method": "framework_enable_cpumask_locks", 00:05:31.455 "req_id": 1 00:05:31.455 } 00:05:31.455 Got JSON-RPC error response 00:05:31.455 response: 00:05:31.455 { 00:05:31.455 "code": -32603, 00:05:31.455 "message": "Failed to claim CPU core: 2" 00:05:31.455 } 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # es=1 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1059030 /var/tmp/spdk.sock 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 1059030 ']' 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1059158 /var/tmp/spdk2.sock 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # '[' -z 1059158 ']' 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:31.455 19:34:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@865 -- # return 0 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:31.713 00:05:31.713 real 0m1.981s 00:05:31.713 user 0m1.024s 00:05:31.713 sys 0m0.179s 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:31.713 19:34:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.713 ************************************ 00:05:31.713 END TEST locking_overlapped_coremask_via_rpc 00:05:31.713 ************************************ 00:05:31.713 19:34:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:31.713 19:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1059030 ]] 00:05:31.713 19:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1059030 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 1059030 ']' 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 1059030 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@956 -- # uname 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1059030 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1059030' 00:05:31.713 killing process with pid 1059030 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@970 -- # kill 1059030 00:05:31.713 19:34:49 event.cpu_locks -- common/autotest_common.sh@975 -- # wait 1059030 00:05:32.278 19:34:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1059158 ]] 00:05:32.278 19:34:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1059158 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 1059158 ']' 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 1059158 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@956 -- # uname 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' 
Linux = Linux ']' 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1059158 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1059158' 00:05:32.278 killing process with pid 1059158 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@970 -- # kill 1059158 00:05:32.278 19:34:49 event.cpu_locks -- common/autotest_common.sh@975 -- # wait 1059158 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1059030 ]] 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1059030 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 1059030 ']' 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 1059030 00:05:32.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1059030) - No such process 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@978 -- # echo 'Process with pid 1059030 is not found' 00:05:32.844 Process with pid 1059030 is not found 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1059158 ]] 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1059158 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' -z 1059158 ']' 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@955 -- # kill -0 1059158 00:05:32.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1059158) - No such process 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@978 -- # echo 'Process with pid 1059158 is not found' 00:05:32.844 Process with pid 1059158 is not found 00:05:32.844 19:34:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.844 00:05:32.844 real 0m17.221s 00:05:32.844 user 0m30.320s 00:05:32.844 sys 0m5.329s 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:32.844 19:34:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.844 ************************************ 00:05:32.844 END TEST cpu_locks 00:05:32.844 ************************************ 00:05:32.844 00:05:32.844 real 0m41.283s 00:05:32.844 user 1m18.302s 00:05:32.844 sys 0m9.316s 00:05:32.844 19:34:50 event -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:32.844 19:34:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.844 ************************************ 00:05:32.844 END TEST event 00:05:32.844 ************************************ 00:05:32.844 19:34:50 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.844 19:34:50 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:05:32.844 19:34:50 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:32.844 19:34:50 -- common/autotest_common.sh@10 -- # set +x 00:05:32.844 ************************************ 00:05:32.844 START TEST thread 00:05:32.844 ************************************ 00:05:32.844 19:34:50 thread -- 
common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.844 * Looking for test storage... 00:05:32.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:32.844 19:34:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.844 19:34:50 thread -- common/autotest_common.sh@1102 -- # '[' 8 -le 1 ']' 00:05:32.844 19:34:50 thread -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:32.844 19:34:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.844 ************************************ 00:05:32.844 START TEST thread_poller_perf 00:05:32.844 ************************************ 00:05:32.844 19:34:50 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:32.844 [2024-07-24 19:34:50.143212] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:32.844 [2024-07-24 19:34:50.143280] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059527 ] 00:05:32.844 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.844 [2024-07-24 19:34:50.201368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.102 [2024-07-24 19:34:50.315933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.102 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:34.475 ====================================== 00:05:34.475 busy:2707936642 (cyc) 00:05:34.475 total_run_count: 292000 00:05:34.475 tsc_hz: 2700000000 (cyc) 00:05:34.475 ====================================== 00:05:34.475 poller_cost: 9273 (cyc), 3434 (nsec) 00:05:34.475 00:05:34.475 real 0m1.308s 00:05:34.475 user 0m1.216s 00:05:34.475 sys 0m0.086s 00:05:34.475 19:34:51 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:34.475 19:34:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.475 ************************************ 00:05:34.475 END TEST thread_poller_perf 00:05:34.475 ************************************ 00:05:34.475 19:34:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.475 19:34:51 thread -- common/autotest_common.sh@1102 -- # '[' 8 -le 1 ']' 00:05:34.475 19:34:51 thread -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:34.475 19:34:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.475 ************************************ 00:05:34.475 START TEST thread_poller_perf 00:05:34.475 ************************************ 00:05:34.475 19:34:51 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.475 [2024-07-24 19:34:51.495456] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:05:34.475 [2024-07-24 19:34:51.495517] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059686 ] 00:05:34.475 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.475 [2024-07-24 19:34:51.558008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.475 [2024-07-24 19:34:51.676157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.475 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:35.848 ====================================== 00:05:35.848 busy:2703029858 (cyc) 00:05:35.848 total_run_count: 3918000 00:05:35.848 tsc_hz: 2700000000 (cyc) 00:05:35.848 ====================================== 00:05:35.848 poller_cost: 689 (cyc), 255 (nsec) 00:05:35.848 00:05:35.848 real 0m1.319s 00:05:35.848 user 0m1.224s 00:05:35.848 sys 0m0.090s 00:05:35.848 19:34:52 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:35.848 19:34:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.848 ************************************ 00:05:35.848 END TEST thread_poller_perf 00:05:35.848 ************************************ 00:05:35.848 19:34:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.848 00:05:35.848 real 0m2.766s 00:05:35.848 user 0m2.491s 00:05:35.848 sys 0m0.275s 00:05:35.848 19:34:52 thread -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:35.848 19:34:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.848 ************************************ 00:05:35.848 END TEST thread 00:05:35.848 ************************************ 00:05:35.848 19:34:52 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:35.848 19:34:52 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:05:35.848 19:34:52 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:35.848 19:34:52 -- common/autotest_common.sh@10 -- # set +x 00:05:35.848 ************************************ 00:05:35.848 START TEST accel 00:05:35.848 ************************************ 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:35.848 * Looking for test storage... 
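As a quick cross-check of the two poller summaries above: poller_cost is simply the busy cycle count divided by total_run_count, converted to nanoseconds through tsc_hz. Shell arithmetic on the reported numbers reproduces the printed values:

  echo $(( 2707936642 / 292000 ))              # 9273 cyc per poll, 1 us period run
  echo $(( 9273 * 1000000000 / 2700000000 ))   # 3434 nsec at tsc_hz 2700000000
  echo $(( 2703029858 / 3918000 ))             # 689 cyc per poll, 0 us period run
  echo $(( 689 * 1000000000 / 2700000000 ))    # 255 nsec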
00:05:35.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:35.848 19:34:52 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:35.848 19:34:52 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:35.848 19:34:52 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.848 19:34:52 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1059885 00:05:35.848 19:34:52 accel -- accel/accel.sh@63 -- # waitforlisten 1059885 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@832 -- # '[' -z 1059885 ']' 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.848 19:34:52 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@837 -- # local max_retries=100 00:05:35.848 19:34:52 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.848 19:34:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@841 -- # xtrace_disable 00:05:35.848 19:34:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.848 19:34:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.848 19:34:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.848 19:34:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.848 19:34:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.848 19:34:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:35.848 19:34:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:35.848 [2024-07-24 19:34:52.973348] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:35.848 [2024-07-24 19:34:52.973429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059885 ] 00:05:35.848 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.848 [2024-07-24 19:34:53.038870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.848 [2024-07-24 19:34:53.147962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@865 -- # return 0 00:05:36.122 19:34:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:36.122 19:34:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:36.122 19:34:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:36.122 19:34:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:36.122 19:34:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:36.122 19:34:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@562 -- # xtrace_disable 00:05:36.122 19:34:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 
19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:36.122 19:34:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:36.122 19:34:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:36.122 19:34:53 accel -- accel/accel.sh@75 -- # killprocess 1059885 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@951 -- # '[' -z 1059885 ']' 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@955 -- # kill -0 1059885 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@956 -- # uname 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:05:36.122 19:34:53 accel -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1059885 00:05:36.123 19:34:53 accel -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:05:36.123 19:34:53 accel -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:05:36.123 19:34:53 accel -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1059885' 00:05:36.123 killing process with pid 1059885 00:05:36.123 19:34:53 accel -- common/autotest_common.sh@970 -- # kill 1059885 00:05:36.123 19:34:53 accel -- common/autotest_common.sh@975 -- # wait 1059885 00:05:36.697 19:34:53 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:36.697 19:34:53 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:36.697 19:34:53 accel -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:05:36.697 19:34:53 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:36.697 19:34:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.697 19:34:53 accel.accel_help -- common/autotest_common.sh@1126 -- # accel_perf -h 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:36.697 19:34:53 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
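The expected_opcs table filled in above comes from the accel_get_opc_assignments RPC: jq's to_entries/map pipeline flattens the opcode-to-module JSON map into key=value lines, and each line is then split on '=' by the IFS== read. A standalone run of the same pipeline on a made-up two-entry map (the real RPC returns one entry per opcode):

  echo '{"copy":"software","crc32c":"software"}' \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
  # copy=software
  # crc32c=software
  # each output line is then consumed by: IFS== read -r opc module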
00:05:36.697 19:34:54 accel.accel_help -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:36.697 19:34:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:36.697 19:34:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:36.697 19:34:54 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:36.697 19:34:54 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:36.697 19:34:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.697 ************************************ 00:05:36.697 START TEST accel_missing_filename 00:05:36.697 ************************************ 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@1126 -- # NOT accel_perf -t 1 -w compress 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # local es=0 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@639 -- # local arg=accel_perf 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@643 -- # type -t accel_perf 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:36.697 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # accel_perf -t 1 -w compress 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:36.697 19:34:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:36.697 [2024-07-24 19:34:54.072583] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:36.697 [2024-07-24 19:34:54.072637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060055 ] 00:05:36.955 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.955 [2024-07-24 19:34:54.135093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.955 [2024-07-24 19:34:54.253235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.955 [2024-07-24 19:34:54.314985] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.213 [2024-07-24 19:34:54.403600] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:37.213 A filename is required. 
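The 'filename is required' failure is the point of this negative test: the compress workload was requested with no input file. The passing form just adds -l, as the compress_verify test below does with the bundled bib file; a sketch with the same binary (workspace-relative path shown for brevity):

  # compress needs an uncompressed input file via -l; per the usage text, -o 0 means 'input file size'
  build/examples/accel_perf -t 1 -w compress -l test/accel/bib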
00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=234 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@663 -- # es=106 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@664 -- # case "$es" in 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@671 -- # es=1 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:05:37.213 00:05:37.213 real 0m0.473s 00:05:37.213 user 0m0.365s 00:05:37.213 sys 0m0.138s 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:37.213 19:34:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:37.213 ************************************ 00:05:37.213 END TEST accel_missing_filename 00:05:37.213 ************************************ 00:05:37.213 19:34:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.213 19:34:54 accel -- common/autotest_common.sh@1102 -- # '[' 10 -le 1 ']' 00:05:37.213 19:34:54 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:37.213 19:34:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.213 ************************************ 00:05:37.213 START TEST accel_compress_verify 00:05:37.213 ************************************ 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@1126 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # local es=0 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@639 -- # local arg=accel_perf 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@643 -- # type -t accel_perf 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:37.213 19:34:54 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.213 
19:34:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:37.213 19:34:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:37.471 [2024-07-24 19:34:54.595912] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:37.471 [2024-07-24 19:34:54.595977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060194 ] 00:05:37.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.471 [2024-07-24 19:34:54.658714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.471 [2024-07-24 19:34:54.778411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.471 [2024-07-24 19:34:54.839179] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.729 [2024-07-24 19:34:54.915746] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:37.729 00:05:37.729 Compression does not support the verify option, aborting. 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=161 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@663 -- # es=33 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@664 -- # case "$es" in 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@671 -- # es=1 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:05:37.729 00:05:37.729 real 0m0.465s 00:05:37.729 user 0m0.345s 00:05:37.729 sys 0m0.154s 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:37.729 19:34:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:37.729 ************************************ 00:05:37.729 END TEST accel_compress_verify 00:05:37.729 ************************************ 00:05:37.729 19:34:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:37.729 19:34:55 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:37.729 19:34:55 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:37.729 19:34:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.729 ************************************ 00:05:37.729 START TEST accel_wrong_workload 00:05:37.729 ************************************ 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@1126 -- # NOT accel_perf -t 1 -w foobar 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # local es=0 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@639 -- # local arg=accel_perf 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@643 -- # type -t accel_perf 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@654 -- # accel_perf -t 
1 -w foobar 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:37.729 19:34:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:37.729 Unsupported workload type: foobar 00:05:37.729 [2024-07-24 19:34:55.103231] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:37.729 accel_perf options: 00:05:37.729 [-h help message] 00:05:37.729 [-q queue depth per core] 00:05:37.729 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.729 [-T number of threads per core 00:05:37.729 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.729 [-t time in seconds] 00:05:37.729 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.729 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:37.729 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.729 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.729 [-S for crc32c workload, use this seed value (default 0) 00:05:37.729 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.729 [-f for fill workload, use this BYTE value (default 255) 00:05:37.729 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.729 [-y verify result if this switch is on] 00:05:37.729 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.729 Can be used to spread operations across a wider range of memory. 
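The usage dump above is printed because foobar is not one of the accepted -w values; any workload from the listed set parses cleanly. For instance, with options drawn only from the help text above:

  # copy workload for 1 second, queue depth 64 per core, 4 KiB transfers
  build/examples/accel_perf -t 1 -w copy -q 64 -o 4096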
00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@654 -- # es=1 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:05:37.729 00:05:37.729 real 0m0.023s 00:05:37.729 user 0m0.014s 00:05:37.729 sys 0m0.009s 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:37.729 19:34:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:37.729 ************************************ 00:05:37.729 END TEST accel_wrong_workload 00:05:37.729 ************************************ 00:05:37.988 Error: writing output failed: Broken pipe 00:05:37.988 19:34:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.988 19:34:55 accel -- common/autotest_common.sh@1102 -- # '[' 10 -le 1 ']' 00:05:37.988 19:34:55 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:37.988 19:34:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.988 ************************************ 00:05:37.988 START TEST accel_negative_buffers 00:05:37.988 ************************************ 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@1126 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # local es=0 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@639 -- # local arg=accel_perf 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@643 -- # type -t accel_perf 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@654 -- # accel_perf -t 1 -w xor -y -x -1 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:37.988 19:34:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:37.988 -x option must be non-negative. 
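Same pattern for the xor buffer count: the help text pins -x at a minimum of 2 source buffers, so -x -1 is rejected during argument parsing before the app starts. A form that passes the check:

  # xor across 2 source buffers; -y verifies the result
  build/examples/accel_perf -t 1 -w xor -y -x 2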
00:05:37.988 [2024-07-24 19:34:55.176536] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:37.988 accel_perf options: 00:05:37.988 [-h help message] 00:05:37.988 [-q queue depth per core] 00:05:37.988 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:37.988 [-T number of threads per core 00:05:37.988 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:37.988 [-t time in seconds] 00:05:37.988 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:37.988 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:37.988 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:37.988 [-l for compress/decompress workloads, name of uncompressed input file 00:05:37.988 [-S for crc32c workload, use this seed value (default 0) 00:05:37.988 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:37.988 [-f for fill workload, use this BYTE value (default 255) 00:05:37.988 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:37.988 [-y verify result if this switch is on] 00:05:37.988 [-a tasks to allocate per core (default: same value as -q)] 00:05:37.988 Can be used to spread operations across a wider range of memory. 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@654 -- # es=1 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:05:37.988 00:05:37.988 real 0m0.024s 00:05:37.988 user 0m0.017s 00:05:37.988 sys 0m0.007s 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:37.988 19:34:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:37.988 ************************************ 00:05:37.988 END TEST accel_negative_buffers 00:05:37.988 ************************************ 00:05:37.988 Error: writing output failed: Broken pipe 00:05:37.988 19:34:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:37.988 19:34:55 accel -- common/autotest_common.sh@1102 -- # '[' 9 -le 1 ']' 00:05:37.988 19:34:55 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:37.988 19:34:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.988 ************************************ 00:05:37.988 START TEST accel_crc32c 00:05:37.988 ************************************ 00:05:37.988 19:34:55 accel.accel_crc32c -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:37.988 19:34:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:37.988 [2024-07-24 19:34:55.235791] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:37.988 [2024-07-24 19:34:55.235861] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060278 ] 00:05:37.988 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.988 [2024-07-24 19:34:55.299920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.246 [2024-07-24 19:34:55.418815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.246 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 
19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:38.247 19:34:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.619 19:34:56 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:39.619 19:34:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.619 00:05:39.619 real 0m1.473s 00:05:39.619 user 0m1.333s 00:05:39.619 sys 0m0.143s 00:05:39.619 19:34:56 accel.accel_crc32c -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:39.619 19:34:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:39.619 ************************************ 00:05:39.619 END TEST accel_crc32c 00:05:39.619 ************************************ 00:05:39.619 19:34:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:39.619 19:34:56 accel -- common/autotest_common.sh@1102 -- # '[' 9 -le 1 ']' 00:05:39.619 19:34:56 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:39.619 19:34:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.620 ************************************ 00:05:39.620 START TEST accel_crc32c_C2 00:05:39.620 ************************************ 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:39.620 19:34:56 
accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:39.620 [2024-07-24 19:34:56.754491] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:39.620 [2024-07-24 19:34:56.754566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060543 ] 00:05:39.620 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.620 [2024-07-24 19:34:56.818260] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.620 [2024-07-24 19:34:56.936862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.620 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:57 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.877 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:39.878 19:34:57 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.250 00:05:41.250 real 0m1.476s 00:05:41.250 user 0m1.336s 00:05:41.250 sys 0m0.143s 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:41.250 19:34:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:41.250 ************************************ 00:05:41.250 END TEST accel_crc32c_C2 00:05:41.250 ************************************ 00:05:41.250 19:34:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:41.250 19:34:58 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:41.250 19:34:58 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:41.250 19:34:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.250 ************************************ 00:05:41.250 START TEST accel_copy 00:05:41.250 ************************************ 00:05:41.250 19:34:58 accel.accel_copy -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w copy -y 00:05:41.250 19:34:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # 
read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:41.251 [2024-07-24 19:34:58.270458] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:41.251 [2024-07-24 19:34:58.270516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060698 ] 00:05:41.251 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.251 [2024-07-24 19:34:58.330718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.251 [2024-07-24 19:34:58.448801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- 
# case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:41.251 19:34:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.621 19:34:59 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.621 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:42.622 19:34:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.622 00:05:42.622 real 0m1.474s 00:05:42.622 user 0m1.335s 00:05:42.622 sys 0m0.140s 00:05:42.622 19:34:59 accel.accel_copy -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:42.622 19:34:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:42.622 ************************************ 00:05:42.622 END TEST accel_copy 00:05:42.622 ************************************ 00:05:42.622 19:34:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.622 19:34:59 accel -- common/autotest_common.sh@1102 -- # '[' 13 -le 1 ']' 00:05:42.622 19:34:59 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:42.622 19:34:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.622 ************************************ 00:05:42.622 START TEST accel_fill 00:05:42.622 ************************************ 00:05:42.622 19:34:59 accel.accel_fill -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@34 -- 
# [[ 0 -gt 0 ]] 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:42.622 19:34:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:42.622 [2024-07-24 19:34:59.791557] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:42.622 [2024-07-24 19:34:59.791617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1060893 ] 00:05:42.622 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.622 [2024-07-24 19:34:59.851804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.622 [2024-07-24 19:34:59.970919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:42.879 19:35:00 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 
-- # IFS=: 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:44.249 19:35:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.249 00:05:44.249 real 0m1.474s 00:05:44.249 user 0m1.318s 00:05:44.249 sys 0m0.158s 00:05:44.249 19:35:01 accel.accel_fill -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:44.249 19:35:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:44.249 ************************************ 00:05:44.250 END TEST accel_fill 00:05:44.250 ************************************ 00:05:44.250 19:35:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:44.250 19:35:01 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:44.250 19:35:01 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:44.250 19:35:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.250 ************************************ 00:05:44.250 START TEST accel_copy_crc32c 00:05:44.250 ************************************ 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w copy_crc32c -y 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:44.250 19:35:01 
accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:44.250 [2024-07-24 19:35:01.308570] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:44.250 [2024-07-24 19:35:01.308632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061129 ] 00:05:44.250 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.250 [2024-07-24 19:35:01.369939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.250 [2024-07-24 19:35:01.488573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 
accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.250 19:35:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@21 
-- # case "$var" in 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.643 00:05:45.643 real 0m1.476s 00:05:45.643 user 0m1.336s 00:05:45.643 sys 0m0.143s 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:45.643 19:35:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:45.643 ************************************ 00:05:45.643 END TEST accel_copy_crc32c 00:05:45.643 ************************************ 00:05:45.643 19:35:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:45.643 19:35:02 accel -- common/autotest_common.sh@1102 -- # '[' 9 -le 1 ']' 00:05:45.643 19:35:02 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:45.643 19:35:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.643 ************************************ 00:05:45.643 START TEST accel_copy_crc32c_C2 00:05:45.643 ************************************ 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c 
-y -C 2 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:45.643 19:35:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:45.643 [2024-07-24 19:35:02.833707] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:45.643 [2024-07-24 19:35:02.833783] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061292 ] 00:05:45.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.643 [2024-07-24 19:35:02.893663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.643 [2024-07-24 19:35:03.009468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:45.901 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:45.902 19:35:03 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.272 00:05:47.272 real 0m1.468s 00:05:47.272 user 0m1.326s 00:05:47.272 sys 0m0.145s 00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1127 -- # xtrace_disable 
00:05:47.272 19:35:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:47.272 ************************************ 00:05:47.272 END TEST accel_copy_crc32c_C2 00:05:47.272 ************************************ 00:05:47.272 19:35:04 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:47.272 19:35:04 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:47.272 19:35:04 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:47.272 19:35:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.272 ************************************ 00:05:47.272 START TEST accel_dualcast 00:05:47.272 ************************************ 00:05:47.272 19:35:04 accel.accel_dualcast -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w dualcast -y 00:05:47.272 19:35:04 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:47.272 19:35:04 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:47.272 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.272 19:35:04 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:47.273 [2024-07-24 19:35:04.347427] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:05:47.273 [2024-07-24 19:35:04.347487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061522 ] 00:05:47.273 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.273 [2024-07-24 19:35:04.409046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.273 [2024-07-24 19:35:04.526725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:47.273 19:35:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:48.645 19:35:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:48.645 00:05:48.645 real 0m1.471s 00:05:48.645 user 0m1.330s 00:05:48.645 sys 0m0.141s 00:05:48.645 19:35:05 accel.accel_dualcast -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:48.645 19:35:05 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:48.645 ************************************ 00:05:48.645 END TEST accel_dualcast 00:05:48.645 ************************************ 00:05:48.645 19:35:05 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:48.645 19:35:05 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:48.645 19:35:05 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:48.645 19:35:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:48.645 ************************************ 00:05:48.645 START TEST accel_compare 00:05:48.645 ************************************ 00:05:48.645 19:35:05 accel.accel_compare -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w compare -y 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:48.645 19:35:05 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:48.645 [2024-07-24 19:35:05.865744] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
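The compare pass now starting follows the same shape as the dualcast pass above: run_test launches accel_test, which drives the accel_perf example app for one second and then checks the module and opcode it reported. A hypothetical standalone rerun of this pass is sketched below; the SPDK_ROOT path is taken from this workspace, and omitting -c (the harness instead pipes its generated JSON config over /dev/fd/62) is an assumption that should likewise land on the software module.

#!/usr/bin/env bash
# Hypothetical standalone rerun of the accel_compare pass traced here.
# SPDK_ROOT matches this workspace; running without -c is an assumption
# and should fall back to the software module, matching the
# "accel_module=software" value read by the trace loop.
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -t 1: run the workload for 1 second
# -w compare: exercise the compare opcode
# -y: verify the result of each operation
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w compare -y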
00:05:48.645 [2024-07-24 19:35:05.865807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061723 ] 00:05:48.645 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.645 [2024-07-24 19:35:05.927176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.903 [2024-07-24 19:35:06.046124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.903 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:48.904 19:35:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.318 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 
19:35:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:50.319 19:35:07 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.319 00:05:50.319 real 0m1.480s 00:05:50.319 user 0m1.339s 00:05:50.319 sys 0m0.144s 00:05:50.319 19:35:07 accel.accel_compare -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:50.319 19:35:07 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:50.319 ************************************ 00:05:50.319 END TEST accel_compare 00:05:50.319 ************************************ 00:05:50.319 19:35:07 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:50.319 19:35:07 accel -- common/autotest_common.sh@1102 -- # '[' 7 -le 1 ']' 00:05:50.319 19:35:07 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:50.319 19:35:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.319 ************************************ 00:05:50.319 START TEST accel_xor 00:05:50.319 ************************************ 00:05:50.319 19:35:07 accel.accel_xor -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w xor -y 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:50.319 [2024-07-24 19:35:07.393379] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
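The 'IFS=: read -r var val' and 'case "$var"' lines that dominate each pass are accel.sh consuming accel_perf's configuration dump one "key: value" pair at a time, then asserting on the opcode and module it collected. A simplified sketch of that pattern follows; the input lines and variable handling are illustrative, since accel.sh feeds the loop from accel_perf's stdout rather than a printf.

#!/usr/bin/env bash
# Simplified sketch of the parsing loop seen in the accel.sh traces.
# Input is illustrative; the real harness pipes accel_perf output in.
accel_opc=
accel_module=
while IFS=: read -r var val; do
    case "$var" in
        *opcode*) accel_opc=${val// /} ;;    # e.g. "xor"
        *module*) accel_module=${val// /} ;; # e.g. "software"
    esac
done < <(printf '%s\n' 'opcode: xor' 'module: software')

# The assertions at the end of each pass mirror the trace:
[[ -n $accel_module && -n $accel_opc && $accel_module == software ]] \
    && echo "OK: opcode=$accel_opc module=$accel_module"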
00:05:50.319 [2024-07-24 19:35:07.393452] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1061876 ] 00:05:50.319 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.319 [2024-07-24 19:35:07.455312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.319 [2024-07-24 19:35:07.569296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:50.319 19:35:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.691 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.692 00:05:51.692 real 0m1.471s 00:05:51.692 user 0m1.320s 00:05:51.692 sys 0m0.153s 00:05:51.692 19:35:08 accel.accel_xor -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:51.692 19:35:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:51.692 ************************************ 00:05:51.692 END TEST accel_xor 00:05:51.692 ************************************ 00:05:51.692 19:35:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:51.692 19:35:08 accel -- common/autotest_common.sh@1102 -- # '[' 9 -le 1 ']' 00:05:51.692 19:35:08 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:51.692 19:35:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.692 ************************************ 00:05:51.692 START TEST accel_xor 00:05:51.692 ************************************ 00:05:51.692 19:35:08 accel.accel_xor -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w xor -y -x 3 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:51.692 19:35:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:51.692 [2024-07-24 19:35:08.911773] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
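This second accel_xor pass re-runs the same workload with -x 3, which is why its trace reads val=3 where the first pass read val=2: accel_perf XORs three source buffers into the destination instead of the default two. A direct invocation under the same assumed workspace path as the earlier sketch:

#!/usr/bin/env bash
# Hypothetical direct rerun of the three-source xor pass (assumed path).
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -x 3: use three xor source buffers (the first pass used the default of 2)
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w xor -y -x 3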
00:05:51.692 [2024-07-24 19:35:08.911837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062126 ] 00:05:51.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.692 [2024-07-24 19:35:08.975532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.950 [2024-07-24 19:35:09.097195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:51.950 19:35:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:53.323 19:35:10 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.323 00:05:53.323 real 0m1.483s 00:05:53.323 user 0m1.336s 00:05:53.323 sys 0m0.150s 00:05:53.323 19:35:10 accel.accel_xor -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:53.323 19:35:10 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:53.323 ************************************ 00:05:53.323 END TEST accel_xor 00:05:53.323 ************************************ 00:05:53.323 19:35:10 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:53.323 19:35:10 accel -- common/autotest_common.sh@1102 -- # '[' 6 -le 1 ']' 00:05:53.323 19:35:10 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:53.323 19:35:10 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.323 ************************************ 00:05:53.323 START TEST accel_dif_verify 00:05:53.323 ************************************ 00:05:53.323 19:35:10 accel.accel_dif_verify -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w dif_verify 00:05:53.323 19:35:10 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:53.323 19:35:10 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:53.324 [2024-07-24 19:35:10.443694] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
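The DIF passes that start here drop -y: the trace shows val=No for the verify flag, since dif_verify is itself a verification opcode. The extra sizes the loop reads ('4096 bytes' twice, '512 bytes', '8 bytes') look like accel_perf reporting its buffer geometry and DIF block/metadata sizes, though the trace alone does not name them. A direct invocation, again under the assumed workspace path:

#!/usr/bin/env bash
# Hypothetical direct rerun of the dif_verify pass (assumed path; no -y,
# matching the "val=No" verify setting in the trace).
SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w dif_verify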
00:05:53.324 [2024-07-24 19:35:10.443759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062311 ] 00:05:53.324 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.324 [2024-07-24 19:35:10.506314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.324 [2024-07-24 19:35:10.629061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:53.324 19:35:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.696 19:35:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:54.696 19:35:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.696 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.696 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:54.697 19:35:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.697 00:05:54.697 real 0m1.490s 00:05:54.697 user 0m1.350s 00:05:54.697 sys 0m0.144s 00:05:54.697 19:35:11 accel.accel_dif_verify -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:54.697 19:35:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:54.697 ************************************ 00:05:54.697 END TEST accel_dif_verify 00:05:54.697 ************************************ 00:05:54.697 19:35:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:54.697 19:35:11 accel -- common/autotest_common.sh@1102 -- # '[' 6 -le 1 ']' 00:05:54.697 19:35:11 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:54.697 19:35:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.697 ************************************ 00:05:54.697 START TEST accel_dif_generate 00:05:54.697 ************************************ 00:05:54.697 19:35:11 accel.accel_dif_generate -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w dif_generate 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 
-w dif_generate 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:54.697 19:35:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:54.697 [2024-07-24 19:35:11.986156] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:54.697 [2024-07-24 19:35:11.986219] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062470 ] 00:05:54.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.697 [2024-07-24 19:35:12.049063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.955 [2024-07-24 19:35:12.173974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.955 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.956 19:35:12 accel.accel_dif_generate -- 
accel/accel.sh@21 -- # case "$var" in 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:54.956 19:35:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:56.326 19:35:13 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.326 00:05:56.326 real 0m1.476s 
00:05:56.326 user 0m1.328s 00:05:56.326 sys 0m0.151s 00:05:56.326 19:35:13 accel.accel_dif_generate -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:56.326 19:35:13 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:56.326 ************************************ 00:05:56.326 END TEST accel_dif_generate 00:05:56.326 ************************************ 00:05:56.326 19:35:13 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:56.326 19:35:13 accel -- common/autotest_common.sh@1102 -- # '[' 6 -le 1 ']' 00:05:56.326 19:35:13 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:56.326 19:35:13 accel -- common/autotest_common.sh@10 -- # set +x 00:05:56.326 ************************************ 00:05:56.326 START TEST accel_dif_generate_copy 00:05:56.326 ************************************ 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w dif_generate_copy 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:56.326 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:56.326 [2024-07-24 19:35:13.511329] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
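The xtrace block above is the harness feeding the dif_generate_copy parameters to the perf tool: a 4096-byte transfer, the software module, and a 1-second run. For readers who want to replay just this case, a minimal sketch of the direct invocation follows; the binary path is taken verbatim from the log, while the SPDK_BIN shorthand and running without the harness's -c /dev/fd/62 JSON config are assumptions of this sketch, not part of the test:

    #!/usr/bin/env bash
    # Sketch: replay the dif_generate_copy case outside the test harness.
    # SPDK_BIN is shorthand for the binary path logged above.
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    # -t 1                 : run for 1 second (the '1 seconds' value in the trace)
    # -w dif_generate_copy : the opcode under test (accel_opc above)
    "$SPDK_BIN" -t 1 -w dif_generate_copy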
00:05:56.326 [2024-07-24 19:35:13.511395] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062741 ] 00:05:56.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.327 [2024-07-24 19:35:13.573345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.327 [2024-07-24 19:35:13.693752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:56.583 19:35:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.954 00:05:57.954 real 0m1.479s 00:05:57.954 user 0m1.336s 00:05:57.954 sys 0m0.146s 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:57.954 19:35:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:57.954 ************************************ 00:05:57.954 END TEST accel_dif_generate_copy 00:05:57.954 ************************************ 00:05:57.954 19:35:14 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:57.954 19:35:14 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.954 19:35:14 accel -- common/autotest_common.sh@1102 -- # '[' 8 -le 1 ']' 00:05:57.954 19:35:14 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:57.954 19:35:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.954 ************************************ 00:05:57.954 START TEST accel_comp 00:05:57.954 ************************************ 00:05:57.954 19:35:15 accel.accel_comp -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:57.954 19:35:15 accel.accel_comp 
-- accel/accel.sh@17 -- # local accel_module 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:57.954 [2024-07-24 19:35:15.039601] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:57.954 [2024-07-24 19:35:15.039667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1062905 ] 00:05:57.954 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.954 [2024-07-24 19:35:15.102751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.954 [2024-07-24 19:35:15.225613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.954 19:35:15 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:57.954 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- 
# val= 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:57.955 19:35:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:59.326 19:35:16 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:59.326 00:05:59.326 real 0m1.496s 00:05:59.326 user 0m1.352s 00:05:59.326 sys 0m0.148s 00:05:59.326 19:35:16 accel.accel_comp -- common/autotest_common.sh@1127 -- # xtrace_disable 00:05:59.326 19:35:16 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:59.326 ************************************ 00:05:59.326 END TEST accel_comp 00:05:59.326 ************************************ 00:05:59.326 19:35:16 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.326 19:35:16 accel -- common/autotest_common.sh@1102 -- # '[' 9 -le 1 ']' 00:05:59.326 19:35:16 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:05:59.326 19:35:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:59.326 ************************************ 00:05:59.326 START TEST accel_decomp 00:05:59.326 
************************************ 00:05:59.326 19:35:16 accel.accel_decomp -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:59.326 19:35:16 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:59.326 [2024-07-24 19:35:16.581737] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:05:59.326 [2024-07-24 19:35:16.581803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063056 ] 00:05:59.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.326 [2024-07-24 19:35:16.644249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.584 [2024-07-24 19:35:16.771363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 
19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:59.584 19:35:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:59.585 19:35:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:59.585 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:59.585 19:35:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:00.955 19:35:18 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.955 00:06:00.955 real 0m1.483s 00:06:00.955 user 0m1.340s 00:06:00.955 sys 0m0.146s 00:06:00.955 19:35:18 accel.accel_decomp -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:00.955 19:35:18 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:00.955 ************************************ 00:06:00.955 END TEST 
accel_decomp 00:06:00.955 ************************************ 00:06:00.955 19:35:18 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.955 19:35:18 accel -- common/autotest_common.sh@1102 -- # '[' 11 -le 1 ']' 00:06:00.955 19:35:18 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:00.955 19:35:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.955 ************************************ 00:06:00.955 START TEST accel_decomp_full 00:06:00.955 ************************************ 00:06:00.955 19:35:18 accel.accel_decomp_full -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.955 19:35:18 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:00.955 19:35:18 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:00.955 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:00.955 19:35:18 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.955 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:00.956 19:35:18 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:00.956 [2024-07-24 19:35:18.116486] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
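This accel_decomp_full case differs from the plain accel_decomp run above only in the trailing -o 0: instead of the default '4096 bytes' transfer, the trace reads '111250 bytes', apparently the size of the bib input file, so -o 0 seems to make each operation span the whole file. That reading, and the meaning of -y, are inferred from the trace rather than from accel_perf documentation. A sketch, with SPDK_BIN and BIB as shorthand for the paths already logged:

    # Sketch: the decompress-full case as invoked above (flag meanings inferred).
    SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    BIB=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    # -l "$BIB" : compressed input fed to the decompress opcode
    # -y        : verify the output (the decompress traces read 'Yes' where the
    #             dif traces read 'No')
    # -o 0      : transfer size; with 0 the trace shows '111250 bytes' instead
    #             of the usual '4096 bytes'
    "$SPDK_BIN" -t 1 -w decompress -l "$BIB" -y -o 0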
00:06:00.956 [2024-07-24 19:35:18.116564] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063335 ] 00:06:00.956 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.956 [2024-07-24 19:35:18.182838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.956 [2024-07-24 19:35:18.302609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:01.214 19:35:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.585 19:35:19 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:02.585 19:35:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.586 19:35:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:02.586 19:35:19 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.586 00:06:02.586 real 0m1.509s 00:06:02.586 user 0m1.362s 00:06:02.586 sys 0m0.150s 00:06:02.586 19:35:19 accel.accel_decomp_full -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:02.586 19:35:19 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:02.586 ************************************ 00:06:02.586 END TEST accel_decomp_full 00:06:02.586 ************************************ 00:06:02.586 19:35:19 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.586 19:35:19 accel -- common/autotest_common.sh@1102 -- # '[' 11 -le 1 ']' 00:06:02.586 19:35:19 accel -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:02.586 19:35:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.586 ************************************ 00:06:02.586 START TEST accel_decomp_mcore 00:06:02.586 ************************************ 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1126 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:02.586 [2024-07-24 19:35:19.669714] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:06:02.586 [2024-07-24 19:35:19.669776] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063491 ] 00:06:02.586 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.586 [2024-07-24 19:35:19.733434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:02.586 [2024-07-24 19:35:19.859714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.586 [2024-07-24 19:35:19.859770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.586 [2024-07-24 19:35:19.859823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.586 [2024-07-24 19:35:19.859826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- 
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:02.586 19:35:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:06:03.959 19:35:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:03.959 19:35:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:03.959 19:35:21 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:03.959 real 0m1.499s
00:06:03.959 user 0m4.803s
00:06:03.959 sys 0m0.159s
00:06:03.959 19:35:21 accel.accel_decomp_mcore -- common/autotest_common.sh@1127 -- # xtrace_disable
00:06:03.959 19:35:21 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:06:03.959 ************************************
00:06:03.959 END TEST accel_decomp_mcore
00:06:03.959 ************************************
00:06:03.959 19:35:21 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:03.959 19:35:21 accel -- common/autotest_common.sh@1102 -- # '[' 13 -le 1 ']'
00:06:03.959 ************************************
00:06:03.959 START TEST accel_decomp_full_mcore
00:06:03.959 ************************************
00:06:03.959 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:06:03.959 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:06:03.959 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:06:03.959 [2024-07-24 19:35:21.217384] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:06:03.959 [2024-07-24 19:35:21.217456] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063653 ]
00:06:03.959 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.959 [2024-07-24 19:35:21.281179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:04.217 [2024-07-24 19:35:21.407699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:04.217 [2024-07-24 19:35:21.407748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:04.217 [2024-07-24 19:35:21.407804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:04.217 [2024-07-24 19:35:21.407808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:06:04.217 19:35:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes
00:06:05.590 19:35:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:05.590 19:35:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:05.590 19:35:22 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:05.590 real 0m1.498s
00:06:05.590 user 0m4.824s
00:06:05.590 sys 0m0.147s
00:06:05.590 ************************************
00:06:05.590 END TEST accel_decomp_full_mcore
00:06:05.590 ************************************
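The full_mcore run above differs from the plain mcore run in one flag, -o 0, and the traced transfer size grows from '4096 bytes' to '111250 bytes', the full bib file. In both cases the script's -m 0xf mask reaches DPDK EAL as -c 0xf, so one reactor starts per set bit (cores 0-3, matching the four reactor notices). A sketch of an equivalent manual invocation from the workspace root; the empty JSON config here only stands in for whatever build_accel_config actually produced:

    # software decompress on cores 0-3 for 1 second, verifying output (-y)
    ./build/examples/accel_perf -c <(echo '{}') -t 1 -w decompress \
        -l test/accel/bib -y -o 0 -m 0xf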
00:06:05.590 19:35:22 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:05.590 19:35:22 accel -- common/autotest_common.sh@1102 -- # '[' 11 -le 1 ']'
00:06:05.590 ************************************
00:06:05.590 START TEST accel_decomp_mthread
00:06:05.590 ************************************
00:06:05.590 19:35:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:06:05.590 19:35:22 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:05.590 19:35:22 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
00:06:05.590 [2024-07-24 19:35:22.762881] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:06:05.590 [2024-07-24 19:35:22.762948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063929 ]
00:06:05.590 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.590 [2024-07-24 19:35:22.825216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.849 [2024-07-24 19:35:22.948854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:05.849 19:35:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:07.227 19:35:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:07.227 19:35:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:07.227 19:35:24 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:07.227 real 0m1.498s
00:06:07.227 user 0m1.357s
00:06:07.227 sys 0m0.143s
00:06:07.227 ************************************
00:06:07.227 END TEST accel_decomp_mthread
00:06:07.227 ************************************
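Each accel_perf launch in this suite receives its accel configuration as JSON on /dev/fd/62: build_accel_config assembles the document, jq -r . normalizes it, and the harness hands it over with -c via process substitution. A minimal sketch of that plumbing, with a deliberately empty config in place of the real one:

    # pass a JSON config on an anonymous fd, as the harness does with /dev/fd/62
    accel_json_cfg='{}'    # the real document comes from build_accel_config
    ./build/examples/accel_perf -c <(jq -r . <<< "$accel_json_cfg") \
        -t 1 -w decompress -l test/accel/bib -y -T 2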
00:06:07.228 19:35:24 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:07.228 19:35:24 accel -- common/autotest_common.sh@1102 -- # '[' 13 -le 1 ']'
00:06:07.228 ************************************
00:06:07.228 START TEST accel_decomp_full_mthread
00:06:07.228 ************************************
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:06:07.228 [2024-07-24 19:35:24.307618] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:06:07.228 [2024-07-24 19:35:24.307688] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064091 ]
00:06:07.228 EAL: No free 2048 kB hugepages reported on node 1
00:06:07.228 [2024-07-24 19:35:24.369948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:07.228 [2024-07-24 19:35:24.493063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds'
00:06:07.228 19:35:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes
00:06:08.615 19:35:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:08.615 19:35:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:06:08.615 19:35:25 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:08.615 real 0m1.530s
00:06:08.615 user 0m1.381s
00:06:08.615 sys 0m0.152s
00:06:08.615 ************************************
00:06:08.615 END TEST accel_decomp_full_mthread
00:06:08.615 ************************************
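The START/END banners and the per-test real/user/sys triplets that bracket every block in this log come from the harness's run_test wrapper, which suppresses xtrace around the banner printing and times the test body. The real wrapper lives in common/autotest_common.sh; the following is only a simplified reconstruction of the behavior visible here, not the actual SPDK code:

    # simplified reconstruction of the run_test pattern seen in this log
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # emits the real/user/sys lines
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }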
00:06:08.615 19:35:25 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:06:08.615 19:35:25 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:06:08.615 19:35:25 accel -- accel/accel.sh@137 -- # build_accel_config
00:06:08.615 19:35:25 accel -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']'
00:06:08.615 ************************************
00:06:08.615 START TEST accel_dif_functional_tests
00:06:08.615 ************************************
00:06:08.615 [2024-07-24 19:35:25.908214] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:06:08.615 [2024-07-24 19:35:25.908312] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064252 ]
00:06:08.615 EAL: No free 2048 kB hugepages reported on node 1
00:06:08.615 [2024-07-24 19:35:25.969394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:08.911 [2024-07-24 19:35:26.096777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:08.911 [2024-07-24 19:35:26.096830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:08.911 [2024-07-24 19:35:26.096834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:08.911 CUnit - A unit testing framework for C - Version 2.1-3
00:06:08.911 http://cunit.sourceforge.net/
00:06:08.911 Suite: accel_dif
00:06:08.911 Test: verify: DIF generated, GUARD check ...passed
00:06:08.911 Test: verify: DIF generated, APPTAG check ...passed
00:06:08.911 Test: verify: DIF generated, REFTAG check ...passed
00:06:08.911 Test: verify: DIF not generated, GUARD check ...[2024-07-24 19:35:26.193629] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:08.911 passed
00:06:08.911 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 19:35:26.193702] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:08.911 passed
00:06:08.911 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 19:35:26.193743] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:08.911 passed
00:06:08.911 Test: verify: APPTAG correct, APPTAG check ...passed
00:06:08.911 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 19:35:26.193818] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:06:08.911 passed
00:06:08.911 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:06:08.911 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:06:08.911 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:06:08.911 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 19:35:26.193972] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:06:08.911 passed
00:06:08.911 Test: verify copy: DIF generated, GUARD check ...passed
00:06:08.911 Test: verify copy: DIF generated, APPTAG check ...passed
00:06:08.911 Test: verify copy: DIF generated, REFTAG check ...passed
00:06:08.911 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 19:35:26.194161] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:06:08.911 passed
00:06:08.911 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 19:35:26.194206] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:06:08.911 passed
00:06:08.911 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 19:35:26.194262] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:06:08.911 passed
00:06:08.911 Test: generate copy: DIF generated, GUARD check ...passed
00:06:08.911 Test: generate copy: DIF generated, APPTAG check ...passed
00:06:08.911 Test: generate copy: DIF generated, REFTAG check ...passed
00:06:08.911 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:06:08.911 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:06:08.911 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:06:08.911 Test: generate copy: iovecs-len validate ...[2024-07-24 19:35:26.194523] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:06:08.911 passed
00:06:08.911 Test: generate copy: buffer alignment validate ...passed
00:06:08.911
00:06:08.911 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:08.911               suites      1      1    n/a      0        0
00:06:08.911                tests     26     26     26      0        0
00:06:08.911              asserts    115    115    115      0      n/a
00:06:08.911
00:06:08.911 Elapsed time =    0.003 seconds
00:06:09.170 real 0m0.589s
00:06:09.170 user 0m0.867s
00:06:09.170 sys 0m0.186s
00:06:09.170 ************************************
00:06:09.170 END TEST accel_dif_functional_tests
00:06:09.170 ************************************
00:06:09.170 real 0m33.608s
00:06:09.170 user 0m36.943s
00:06:09.170 sys 0m4.666s
00:06:09.170 ************************************
00:06:09.170 END TEST accel
00:06:09.170 ************************************
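The CUnit suite above reports failures only through its Run Summary table, so a CI-side check has to parse that table rather than an exit banner. A hypothetical post-processing gate (not part of the SPDK harness; the log file name is made up) that fails when any Failed column is non-zero:

    # hypothetical gate: the Failed column is second from last on each summary row
    awk '/ (suites|tests|asserts) / { if ($(NF-1)+0 != 0) bad=1 } END { exit bad }' build.log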
00:06:09.170 19:35:26 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:06:09.170 19:35:26 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']'
00:06:09.170 ************************************
00:06:09.170 START TEST accel_rpc
00:06:09.170 ************************************
00:06:09.427 * Looking for test storage...
00:06:09.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:06:09.427 19:35:26 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:09.427 19:35:26 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:06:09.427 19:35:26 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1064439
00:06:09.427 19:35:26 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1064439
00:06:09.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:09.427 [2024-07-24 19:35:26.633020] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:06:09.427 [2024-07-24 19:35:26.633086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064439 ]
00:06:09.427 EAL: No free 2048 kB hugepages reported on node 1
00:06:09.427 [2024-07-24 19:35:26.690203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:09.684 [2024-07-24 19:35:26.796314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:09.684 19:35:26 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:06:09.684 19:35:26 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:06:09.684 19:35:26 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:06:09.684 ************************************
00:06:09.684 START TEST accel_assign_opcode
00:06:09.684 ************************************
00:06:09.684 19:35:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:06:09.684 [2024-07-24 19:35:26.860902] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:06:09.684 19:35:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:06:09.684 [2024-07-24 19:35:26.868892] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:06:09.684 19:35:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:06:09.941 19:35:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:06:09.941 19:35:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:06:09.941 19:35:27 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:06:09.941 software
00:06:09.941 real 0m0.311s
00:06:09.941 user 0m0.040s
00:06:09.941 sys 0m0.006s
00:06:09.941 ************************************
00:06:09.941 END TEST accel_assign_opcode
00:06:09.941 ************************************
00:06:09.941 19:35:27 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1064439
00:06:09.942 19:35:27 accel_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1064439'
00:06:09.942 killing process with pid 1064439
00:06:09.942 19:35:27 accel_rpc -- common/autotest_common.sh@970 -- # kill 1064439
00:06:09.942 19:35:27 accel_rpc -- common/autotest_common.sh@975 -- # wait 1064439
00:06:10.506 real 0m1.150s
00:06:10.506 user 0m1.079s
00:06:10.506 sys 0m0.415s
00:06:10.506 ************************************
00:06:10.506 END TEST accel_rpc
00:06:10.506 ************************************
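The accel_assign_opcode suite drives the target entirely over JSON-RPC: assignments are made before framework initialization, which is why spdk_tgt is started with --wait-for-rpc. A sketch of the same sequence issued by hand with the rpc.py helper shown in the log, run from the repo root against a freshly started target:

    ./build/bin/spdk_tgt --wait-for-rpc &
    ./scripts/rpc.py accel_assign_opc -o copy -m software      # before init only
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expect: software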
00:06:10.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:10.506 19:35:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:10.506 19:35:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1064643 00:06:10.506 19:35:27 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:10.506 19:35:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1064643 00:06:10.506 19:35:27 app_cmdline -- common/autotest_common.sh@832 -- # '[' -z 1064643 ']' 00:06:10.506 19:35:27 app_cmdline -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.506 19:35:27 app_cmdline -- common/autotest_common.sh@837 -- # local max_retries=100 00:06:10.506 19:35:27 app_cmdline -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.506 19:35:27 app_cmdline -- common/autotest_common.sh@841 -- # xtrace_disable 00:06:10.506 19:35:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:10.506 [2024-07-24 19:35:27.842286] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:06:10.506 [2024-07-24 19:35:27.842380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064643 ] 00:06:10.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.764 [2024-07-24 19:35:27.909852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.764 [2024-07-24 19:35:28.031390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.021 19:35:28 app_cmdline -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:06:11.021 19:35:28 app_cmdline -- common/autotest_common.sh@865 -- # return 0 00:06:11.021 19:35:28 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:11.279 { 00:06:11.279 "version": "SPDK v24.09-pre git sha1 29c5e1f47", 00:06:11.279 "fields": { 00:06:11.279 "major": 24, 00:06:11.279 "minor": 9, 00:06:11.279 "patch": 0, 00:06:11.279 "suffix": "-pre", 00:06:11.279 "commit": "29c5e1f47" 00:06:11.279 } 00:06:11.279 } 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:11.279 19:35:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@651 -- # local es=0 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:11.279 19:35:28 app_cmdline -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.536 request: 00:06:11.536 { 00:06:11.536 "method": "env_dpdk_get_mem_stats", 00:06:11.536 "req_id": 1 00:06:11.536 } 00:06:11.536 Got JSON-RPC error response 00:06:11.536 response: 00:06:11.536 { 00:06:11.536 "code": -32601, 00:06:11.536 "message": "Method not found" 00:06:11.536 } 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@654 -- # es=1 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:06:11.536 19:35:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1064643 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@951 -- # '[' -z 1064643 ']' 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@955 -- # kill -0 1064643 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@956 -- # uname 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1064643 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1064643' 00:06:11.536 killing process with pid 1064643 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@970 -- # kill 1064643 00:06:11.536 19:35:28 app_cmdline -- common/autotest_common.sh@975 -- # wait 1064643 00:06:12.101 00:06:12.101 real 0m1.601s 00:06:12.101 user 0m1.930s 00:06:12.101 sys 0m0.484s 00:06:12.101 19:35:29 app_cmdline -- common/autotest_common.sh@1127 -- # xtrace_disable 
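The app_cmdline suite above boots spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then confirms that any method outside that list is refused with JSON-RPC error -32601 (Method not found), as the request/response pair above shows. The same check as a standalone sketch, with $SPDK_DIR standing in for the full Jenkins workspace path:

    $SPDK_DIR/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    # (the harness waits for /var/tmp/spdk.sock to appear before issuing RPCs)
    # Allowed methods answer normally:
    $SPDK_DIR/scripts/rpc.py spdk_get_version
    $SPDK_DIR/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
    # Anything else must fail; rpc.py exits non-zero on a JSON-RPC error:
    $SPDK_DIR/scripts/rpc.py env_dpdk_get_mem_stats && echo 'allowlist leak' >&2
    kill $tgt_pid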
00:06:12.101 19:35:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.101 ************************************ 00:06:12.101 END TEST app_cmdline 00:06:12.101 ************************************ 00:06:12.101 19:35:29 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.101 19:35:29 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:06:12.101 19:35:29 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:12.101 19:35:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.101 ************************************ 00:06:12.101 START TEST version 00:06:12.101 ************************************ 00:06:12.101 19:35:29 version -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:12.101 * Looking for test storage... 00:06:12.101 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.101 19:35:29 version -- app/version.sh@17 -- # get_header_version major 00:06:12.101 19:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # cut -f2 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.101 19:35:29 version -- app/version.sh@17 -- # major=24 00:06:12.101 19:35:29 version -- app/version.sh@18 -- # get_header_version minor 00:06:12.101 19:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # cut -f2 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.101 19:35:29 version -- app/version.sh@18 -- # minor=9 00:06:12.101 19:35:29 version -- app/version.sh@19 -- # get_header_version patch 00:06:12.101 19:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # cut -f2 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.101 19:35:29 version -- app/version.sh@19 -- # patch=0 00:06:12.101 19:35:29 version -- app/version.sh@20 -- # get_header_version suffix 00:06:12.101 19:35:29 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # cut -f2 00:06:12.101 19:35:29 version -- app/version.sh@14 -- # tr -d '"' 00:06:12.101 19:35:29 version -- app/version.sh@20 -- # suffix=-pre 00:06:12.101 19:35:29 version -- app/version.sh@22 -- # version=24.9 00:06:12.101 19:35:29 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:12.101 19:35:29 version -- app/version.sh@28 -- # version=24.9rc0 00:06:12.101 19:35:29 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:12.101 19:35:29 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:12.359 19:35:29 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:06:12.359 19:35:29 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:12.359 00:06:12.359 real 0m0.109s 00:06:12.359 user 0m0.056s 00:06:12.359 sys 0m0.074s 00:06:12.359 19:35:29 version -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:12.359 19:35:29 version -- common/autotest_common.sh@10 -- # set +x 00:06:12.359 ************************************ 00:06:12.359 END TEST version 00:06:12.359 ************************************ 00:06:12.359 19:35:29 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@198 -- # uname -s 00:06:12.359 19:35:29 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:12.359 19:35:29 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:12.359 19:35:29 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:12.359 19:35:29 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:12.359 19:35:29 -- common/autotest_common.sh@731 -- # xtrace_disable 00:06:12.359 19:35:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.359 19:35:29 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:12.359 19:35:29 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:12.359 19:35:29 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.359 19:35:29 -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:06:12.359 19:35:29 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:12.359 19:35:29 -- common/autotest_common.sh@10 -- # set +x 00:06:12.359 ************************************ 00:06:12.359 START TEST nvmf_tcp 00:06:12.359 ************************************ 00:06:12.359 19:35:29 nvmf_tcp -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:12.359 * Looking for test storage... 00:06:12.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.359 19:35:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:12.359 19:35:29 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:12.359 19:35:29 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:12.359 19:35:29 nvmf_tcp -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:06:12.359 19:35:29 nvmf_tcp -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:12.359 19:35:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.359 ************************************ 00:06:12.359 START TEST nvmf_target_core 00:06:12.359 ************************************ 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:12.360 * Looking for test storage... 
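END TEST version above passes because the fields scraped from include/spdk/version.h agree with the Python package. The scrape, reduced to a sketch ($SPDK_DIR is a placeholder, and the suffix handling is inferred from the version=24.9rc0 step in the trace):

    v=$SPDK_DIR/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$v" | cut -f2 | tr -d '"')
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [ -n "$suffix" ] && version=${version}rc0           # -pre is reported as rc0 by the Python side
    py_version=$(PYTHONPATH=$SPDK_DIR/python python3 -c 'import spdk; print(spdk.__version__)')
    [ "$py_version" = "$version" ]                      # 24.9rc0 == 24.9rc0 in this run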
00:06:12.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:12.360 
************************************ 00:06:12.360 START TEST nvmf_abort 00:06:12.360 ************************************ 00:06:12.360 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:12.619 * Looking for test storage... 00:06:12.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:12.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 
33: [: : integer expression expected 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@452 -- # prepare_net_devs 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # local -g is_hw=no 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # remove_spdk_ns 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # xtrace_disable 00:06:12.619 19:35:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # pci_devs=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -a pci_devs 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # pci_net_devs=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # pci_drivers=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -A pci_drivers 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # net_devs=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # local -ga net_devs 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # e810=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # local -ga e810 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # x722=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # local -ga x722 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # mlx=() 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # local -ga mlx 00:06:14.517 19:35:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:14.517 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:14.517 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.517 19:35:31 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # [[ up == up ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:14.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # [[ up == up ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:14.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # is_hw=yes 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- 
# TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 2 > 1 ))
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP=
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2
00:06:14.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:14.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms
00:06:14.517
00:06:14.517 --- 10.0.0.2 ping statistics ---
00:06:14.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:14.517 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms
00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:14.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:14.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:06:14.517 00:06:14.517 --- 10.0.0.1 ping statistics --- 00:06:14.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.517 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # return 0 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.517 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@725 -- # xtrace_disable 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@485 -- # nvmfpid=1066682 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@486 -- # waitforlisten 1066682 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@832 -- # '[' -z 1066682 ']' 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local max_retries=100 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@841 -- # xtrace_disable 00:06:14.518 19:35:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:14.518 [2024-07-24 19:35:31.867955] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
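What the plumbing above accomplishes: the harness takes the dual-port E810 NIC discovered earlier (cvl_0_0 and cvl_0_1), moves one port into a private network namespace, and wires 10.0.0.2 (target side) to 10.0.0.1 (initiator side) across the physical link, so a single host can exercise real NIC hardware end to end. Stripped of trace noise, and with $SPDK_DIR abbreviating the workspace tree, the sequence is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # reachability checked in both directions above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application then runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE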
00:06:14.518 [2024-07-24 19:35:31.868047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:14.774 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.774 [2024-07-24 19:35:31.934708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.774 [2024-07-24 19:35:32.045716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:14.774 [2024-07-24 19:35:32.045759] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:14.774 [2024-07-24 19:35:32.045788] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:14.774 [2024-07-24 19:35:32.045800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:14.775 [2024-07-24 19:35:32.045809] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:14.775 [2024-07-24 19:35:32.046133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.775 [2024-07-24 19:35:32.046189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.775 [2024-07-24 19:35:32.046192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@865 -- # return 0 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@731 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 [2024-07-24 19:35:32.189164] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 Malloc0 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:15.032 Delay0 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 [2024-07-24 19:35:32.259414] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:15.032 19:35:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:15.032 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.032 [2024-07-24 19:35:32.407355] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:17.559 Initializing NVMe Controllers 00:06:17.559 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:17.559 controller IO queue size 128 less than required 00:06:17.559 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:17.559 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:17.559 Initialization complete. Launching workers. 
00:06:17.559 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33899 00:06:17.559 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33960, failed to submit 62 00:06:17.559 success 33903, unsuccess 57, failed 0 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # nvmfcleanup 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:17.559 rmmod nvme_tcp 00:06:17.559 rmmod nvme_fabrics 00:06:17.559 rmmod nvme_keyring 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # '[' -n 1066682 ']' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # killprocess 1066682 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@951 -- # '[' -z 1066682 ']' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # kill -0 1066682 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # uname 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1066682 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1066682' 00:06:17.559 killing process with pid 1066682 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # kill 1066682 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@975 -- # wait 1066682 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@282 -- # remove_spdk_ns 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.559 19:35:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:06:20.090 00:06:20.090 real 0m7.214s 00:06:20.090 user 0m10.678s 00:06:20.090 sys 0m2.427s 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # xtrace_disable 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:20.090 ************************************ 00:06:20.090 END TEST nvmf_abort 00:06:20.090 ************************************ 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:20.090 ************************************ 00:06:20.090 START TEST nvmf_ns_hotplug_stress 00:06:20.090 ************************************ 00:06:20.090 19:35:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:20.090 * Looking for test storage... 
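With nvmf_abort finished, its scenario can be summarized: a malloc bdev is wrapped in a delay bdev that injects one second of latency on every read and write, so the initiator's I/O stays queued long enough for abort commands to catch it; the run above submitted 33960 aborts with 33903 successes. The RPC sequence, exactly as traced (rpc.py shortened from the workspace scripts path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MB backing store, 4096-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: the abort example queues I/O against the slow namespace and aborts it:
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128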
00:06:20.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:20.090 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:20.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@452 -- # prepare_net_devs 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # xtrace_disable 00:06:20.091 19:35:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # pci_devs=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -a pci_devs 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # pci_net_devs=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # pci_drivers=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -A pci_drivers 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # net_devs=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # local -ga net_devs 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # e810=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # 
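
The "[: : integer expression expected" message above is a real (if harmless) bug at nvmf/common.sh line 33: the traced test is '[' '' -eq 1 ']', an unset variable expanded to the empty string and handed to the numeric -eq operator, which test(1) rejects with an error instead of evaluating to false. The branch is simply not taken, so the run continues. A defensive form, with FLAG standing in for whatever variable common.sh tests there (the actual name is not visible in this trace):

    FLAG=""                             # unset/empty, as in the failing test
    # [ "$FLAG" -eq 1 ]                 # reproduces: [: : integer expression expected
    if [ "${FLAG:-0}" -eq 1 ]; then     # default the expansion so -eq always sees an integer
        echo "flag is set"
    fi
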
local -ga e810 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # x722=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # local -ga x722 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # mlx=() 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # local -ga mlx 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:21.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.996 19:35:39 
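
For reference, the device-ID tables assembled above, written out flat (vendor 0x8086 is Intel, 0x15b3 is Mellanox; the IDs are the ones visible in the pci_bus_cache lookups). Because this run sets SPDK_TEST_NVMF_NICS=e810, pci_devs is then narrowed to the e810 matches, of which two functions were found:

    e810=(0x1592 0x159b)      # Intel E810 family
    x722=(0x37d2)             # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox parts
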
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:21.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # [[ up == up ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:21.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # [[ up == up ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
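
Each matched PCI function is then resolved to its kernel net device through sysfs, which is how 0000:0a:00.0 maps to cvl_0_0 here (the second port, 0000:0a:00.1, resolves to cvl_0_1 just below). The lookup, reduced to a runnable sketch for one function:

    pci=0000:0a:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue     # glob stays literal if no netdev is bound
        echo "${dev##*/}"             # prints cvl_0_0 on this machine
    done
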
nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:21.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # is_hw=yes 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:06:21.996 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:06:21.997 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:06:21.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:06:21.997 00:06:21.997 --- 10.0.0.2 ping statistics --- 00:06:21.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.997 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:06:21.997 00:06:21.997 --- 10.0.0.1 ping statistics --- 00:06:21.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.997 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # return 0 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@725 -- # xtrace_disable 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # nvmfpid=1068921 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # waitforlisten 1068921 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # '[' -z 1068921 ']' 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local max_retries=100 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
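
nvmf_tcp_init has now built a two-port loopback topology: the target port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule admits TCP port 4420 from the initiator side, and one ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. The same setup condensed into runnable form (interface names and addresses are the ones from this log):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
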
00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@841 -- # xtrace_disable 00:06:21.997 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:21.997 [2024-07-24 19:35:39.286496] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:06:21.997 [2024-07-24 19:35:39.286594] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.997 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.997 [2024-07-24 19:35:39.351714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:22.255 [2024-07-24 19:35:39.463651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.255 [2024-07-24 19:35:39.463704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.255 [2024-07-24 19:35:39.463732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.255 [2024-07-24 19:35:39.463743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.255 [2024-07-24 19:35:39.463752] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:22.255 [2024-07-24 19:35:39.463833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:22.255 [2024-07-24 19:35:39.463897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:22.255 [2024-07-24 19:35:39.463900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@865 -- # return 0 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@731 -- # xtrace_disable 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:22.255 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:22.512 [2024-07-24 19:35:39.876686] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.770 19:35:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:23.027 19:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:23.027 
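
With the target process up (pid 1068921, core mask 0xE, running inside the namespace), ns_hotplug_stress.sh configures it over the RPC socket and builds the bdev stack it will hot-plug: a malloc bdev wrapped in a delay bdev (Delay0) and a 1000 MB null bdev (NULL1). The whole bring-up, condensed from the RPCs traced here and just below (the long Jenkins path to rpc.py shortened; flags exactly as in the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
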
[2024-07-24 19:35:40.405971] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:23.285 19:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:23.543 19:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:23.543 Malloc0 00:06:23.800 19:35:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:23.800 Delay0 00:06:24.057 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.058 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:24.315 NULL1 00:06:24.315 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:24.573 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1069330 00:06:24.573 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:24.573 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:24.573 19:35:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.830 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.830 19:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.088 19:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:25.088 19:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:25.345 true 00:06:25.345 19:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:25.345 19:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.602 19:35:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:06:25.859 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:25.859 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:26.126 true 00:06:26.126 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:26.126 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.389 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.675 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:26.675 19:35:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:26.932 true 00:06:26.932 19:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:26.932 19:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.189 19:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.447 19:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:27.447 19:35:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:27.704 true 00:06:27.704 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:27.704 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.961 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.218 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:28.218 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:28.475 true 00:06:28.475 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:28.475 19:35:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.733 19:35:46 
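
Everything from here down to the perf summary is a single loop, recognizable from the ns_hotplug_stress.sh line numbers in the trace prefixes (@44-@50): as long as the spdk_nvme_perf process started at @40 is alive, the script detaches namespace 1, re-attaches Delay0, bumps null_size and resizes NULL1, so the initiator keeps seeing namespaces appear, disappear and change size under active I/O. Paraphrased below (not the verbatim script):

    null_size=1000
    while kill -0 "$PERF_PID"; do                                       # @44
        rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45
        rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46
        null_size=$((null_size + 1))                                    # @49
        rpc_py bdev_null_resize NULL1 "$null_size"                      # @50
    done

The final kill -0 probe fails once perf exits, which is the "No such process" message near the end of this section.
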
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.990 19:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:28.990 19:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:29.247 true 00:06:29.247 19:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:29.247 19:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.504 19:35:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.761 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:29.761 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:30.018 true 00:06:30.018 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:30.018 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.276 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.533 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:30.533 19:35:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:30.790 true 00:06:30.790 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:30.790 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.047 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.304 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:31.304 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:31.560 true 00:06:31.560 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:31.560 19:35:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.817 19:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.074 19:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:32.074 19:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:32.331 true 00:06:32.331 19:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:32.331 19:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.588 19:35:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.845 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:32.845 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:33.102 true 00:06:33.102 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:33.102 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.359 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.616 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:33.616 19:35:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:33.874 true 00:06:33.874 19:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:33.874 19:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.131 19:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.387 19:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:34.387 19:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:34.643 true 00:06:34.643 19:35:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:34.643 19:35:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.899 19:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.155 19:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:35.155 19:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:35.412 true 00:06:35.412 19:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:35.412 19:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.734 19:35:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.992 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:35.992 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:36.249 true 00:06:36.249 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:36.249 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.505 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.763 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:36.763 19:35:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:37.020 true 00:06:37.020 19:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:37.020 19:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.277 19:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.534 19:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:37.534 19:35:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:37.791 true 00:06:37.791 19:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:37.791 19:35:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.048 19:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.305 19:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:38.305 19:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:38.562 true 00:06:38.562 19:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:38.562 19:35:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.818 19:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.076 19:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:39.076 19:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:39.333 true 00:06:39.333 19:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:39.333 19:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.589 19:35:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.845 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:39.845 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:40.102 true 00:06:40.102 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:40.102 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.359 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.616 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:40.616 19:35:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:40.872 true 00:06:40.872 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:40.872 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.131 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.389 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:41.389 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:41.692 true 00:06:41.692 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:41.692 19:35:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.950 19:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.206 19:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:42.206 19:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:42.463 true 00:06:42.463 19:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:42.463 19:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.720 19:35:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.977 19:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:42.977 19:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:42.977 true 00:06:43.233 19:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:43.234 19:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.490 19:36:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.747 19:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:43.747 19:36:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:44.004 true 00:06:44.004 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:44.004 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.261 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.519 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:44.519 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:44.776 true 00:06:44.776 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:44.776 19:36:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.034 19:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.291 19:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:45.292 19:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:45.548 true 00:06:45.548 19:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:45.548 19:36:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.804 19:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.061 19:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:46.061 19:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:46.319 true 00:06:46.319 19:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:46.319 19:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.576 19:36:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.833 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:46.833 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:47.090 true 00:06:47.090 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:47.090 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.348 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.605 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:47.605 19:36:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:47.862 true 00:06:47.862 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:47.862 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.119 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.377 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:48.377 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:48.633 true 00:06:48.633 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330 00:06:48.633 19:36:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.889 19:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.146 19:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:49.146 19:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:49.402 true 00:06:49.402 19:36:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:49.402 19:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.659 19:36:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:49.915 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:06:49.915 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:06:50.172 true
00:06:50.172 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:50.172 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.428 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:50.684 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:06:50.684 19:36:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:06:50.941 true
00:06:50.941 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:50.941 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.198 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:51.455 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:06:51.455 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:06:51.711 true
00:06:51.711 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:51.711 19:36:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:51.968 19:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:52.225 19:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:06:52.225 19:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:06:52.482 true
00:06:52.483 19:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:52.483 19:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:52.740 19:36:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:52.997 19:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:06:52.997 19:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:06:53.253 true
00:06:53.253 19:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:53.253 19:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:53.510 19:36:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:53.766 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:06:53.766 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:06:54.023 true
00:06:54.023 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:54.023 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:54.280 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:54.537 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:06:54.537 19:36:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:06:54.795 true
00:06:54.795 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:54.795 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.052 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:55.052 Initializing NVMe Controllers
00:06:55.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:55.052 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:06:55.052 Controller IO queue size 128, less than required.
00:06:55.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:55.052 WARNING: Some requested NVMe devices were skipped
00:06:55.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:55.052 Initialization complete. Launching workers.
00:06:55.052 ========================================================
00:06:55.052                                                                                              Latency(us)
00:06:55.052 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:55.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   21731.87      10.61    5890.24    3564.22   10297.08
00:06:55.052 ========================================================
00:06:55.052 Total                                                                    :   21731.87      10.61    5890.24    3564.22   10297.08
00:06:55.052
00:06:55.310 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:06:55.310 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:06:55.567 true
00:06:55.567 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1069330
00:06:55.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1069330) - No such process
00:06:55.567 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1069330
00:06:55.567 19:36:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:55.825 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:56.113 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:56.113 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:56.113 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:56.113 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:56.113 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:56.370 null0
00:06:56.370 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:56.370 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:56.370 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
null1
00:06:56.627 19:36:13
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.627 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.627 19:36:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:56.884 null2 00:06:56.884 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:56.884 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:56.884 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:57.142 null3 00:06:57.142 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.142 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.142 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:57.142 null4 00:06:57.399 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.399 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.399 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:57.399 null5 00:06:57.399 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.399 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.399 19:36:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:57.657 null6 00:06:57.657 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.657 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.657 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:57.916 null7 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
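The xtrace up to this point is the tail of the single-namespace phase of ns_hotplug_stress.sh: while the perf initiator (PID 1069330) stays alive, the script hot-removes and re-adds namespace 1 (Delay0) and grows the NULL1 bdev one step per pass (null_size 1033 ... 1040). The perf summary printed at 00:06:55 is consistent with that setup: 21731.87 IOPS at 10.61 MiB/s works out to roughly 512 bytes per IO, and "Skipping inactive NS 1" is expected because NS 1 happened to be detached at the moment the initiator connected, leaving only NSID 2 under load. Reduced to a sketch (PERF_PID and the starting size are illustrative stand-ins, not necessarily the script's actual names):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
null_size=1032
while kill -0 "$PERF_PID" 2>/dev/null; do                          # @44: loop while the I/O generator runs
  "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-unplug NS 1 under load
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: plug it back in
  null_size=$((null_size + 1))                                     # @49
  "$rpc" bdev_null_resize NULL1 "$null_size"                       # @50: resize the bdev behind NSID 2
done
wait "$PERF_PID"                                                   # @53: reap the initiator once kill -0 fails

Once kill -0 reports "No such process" (line 44 above), the loop exits, namespaces 1 and 2 are removed, and the script switches to the eight-thread phase whose interleaved output follows.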
00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:57.916 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1074064 1074065 1074067 1074069 1074071 1074073 1074075 1074077 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:57.917 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.175 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.175 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.175 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.432 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.432 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.432 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.432 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.432 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:58.690 19:36:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:58.948 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.206 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.464 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:59.722 19:36:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:59.980 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
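From here on, eight concurrent add_remove workers interleave their xtrace, which is why the add/remove lines appear in no particular order. Each worker's body is visible in the @14-@18 lines: it pins one nsid/bdev pair and runs ten add/remove rounds against cnode1. A sketch consistent with that trace (add_remove is the script's own function name; the rpc shorthand is illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
add_remove() {
  local nsid=$1 bdev=$2                # @14: e.g. nsid=1 bdev=null0
  for ((i = 0; i < 10; i++)); do       # @16: ten hotplug rounds per worker
    "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17: attach bdev as a fixed NSID
    "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18: detach it again
  done
}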
00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.238 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:00.496 19:36:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
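The launcher that produced these workers is also visible in the trace (@58-@66): create the eight null bdevs, background one add_remove per nsid/bdev pair, collect each $!, and wait on all of them; the earlier "wait 1074064 1074065 ..." entry lists exactly those worker PIDs. A sketch under the same caveats as above:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
  "$rpc" bdev_null_create "null$i" 100 4096   # @60: null bdev, size 100 (MB in this RPC), 4096-byte blocks
done
for ((i = 0; i < nthreads; i++)); do
  add_remove "$((i + 1))" "null$i" &          # @63: nsid i+1 backed by null$i
  pids+=($!)                                  # @64: remember the worker PID
done
wait "${pids[@]}"                             # @66: block until all eight workers finish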
00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:00.754 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.012 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.270 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:01.528 19:36:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.786 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:01.787 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.045 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.303 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:02.561 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:02.819 19:36:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
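The @16/@17/@18 lines above (and continuing below) are the heart of target/ns_hotplug_stress.sh: eight workers, one per namespace, each repeatedly attaching and detaching its null bdev on the shared subsystem nqn.2016-06.io.spdk:cnode1. A minimal bash reconstruction from the trace; only the rpc.py invocations and the 10-iteration bound are taken verbatim from the @16/@17/@18 lines, while the helper name and the parallel-job structure (suggested by the interleaved namespace ordering) are assumptions:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                        # hypothetical helper; one worker per nsid
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do    # @16 in the trace
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # @17
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # @18
    done
}

for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &   # namespaces 1-8 backed by null0-null7
done
wait

Because all eight workers race against the target's admin path, this stresses namespace attach/detach handling under concurrency; the trace of the remaining iterations continues below.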
00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.077 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:03.334 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:03.592 rmmod nvme_tcp 00:07:03.592 rmmod nvme_fabrics 00:07:03.592 rmmod nvme_keyring 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # '[' -n 1068921 ']' 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # killprocess 1068921 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' -z 1068921 ']' 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # kill -0 1068921 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # uname 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1068921 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1068921' 00:07:03.592 killing process with pid 1068921 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # kill 1068921 00:07:03.592 19:36:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@975 -- # wait 1068921 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.850 19:36:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:07:06.378 00:07:06.378 real 0m46.222s 00:07:06.378 user 3m36.911s 00:07:06.378 sys 0m17.694s 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:06.378 ************************************ 00:07:06.378 END TEST nvmf_ns_hotplug_stress 00:07:06.378 ************************************ 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:06.378 ************************************ 00:07:06.378 START TEST nvmf_delete_subsystem 00:07:06.378 ************************************ 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:06.378 * Looking for test storage... 
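Before following the delete_subsystem output further, note the teardown the hotplug test just ran (traced above): nvmftestfini clears the EXIT trap, nvmfcleanup syncs and unloads the initiator-side kernel modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), killprocess 1068921 stops the target reactor after confirming it is not a sudo wrapper, and remove_spdk_ns plus ip -4 addr flush cvl_0_1 return the NICs to a clean state before the per-test real/user/sys accounting is printed. A sketch of the module-unload loop, reconstructed from the nvmf/common.sh trace; the set +e/set -e bracketing, the {1..20} retry bound, and the modprobe -v -r calls are verbatim, while the break/sleep details are assumptions since the trace only shows one successful pass:

sync
set +e                               # unloading can fail while connections linger
for i in {1..20}; do
    modprobe -v -r nvme-tcp &&       # the rmmod lines show nvme_fabrics/nvme_keyring going with it
        modprobe -v -r nvme-fabrics &&
        break
    sleep 1                          # assumption: brief pause between retries
done
set -e

The delete_subsystem bring-up resumes below with the usual test-storage discovery and common.sh sourcing.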
00:07:06.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:06.378 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:06.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # xtrace_disable 00:07:06.379 19:36:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # pci_devs=() 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -a pci_devs 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # pci_net_devs=() 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # pci_drivers=() 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -A pci_drivers 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # net_devs=() 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # local -ga net_devs 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # e810=() 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # local -ga e810 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # x722=() 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # 
local -ga x722 00:07:08.275 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # mlx=() 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # local -ga mlx 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:08.276 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:07:08.276 19:36:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:08.276 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # [[ up == up ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:08.276 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # [[ up == up ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:08.276 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 
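The gather_supported_nvmf_pci_devs block above builds ID whitelists for Intel E810/X722 and Mellanox parts, matches both ports of the installed E810 NIC (vendor 0x8086, device 0x159b), and resolves each PCI function to its kernel net device by globbing sysfs, yielding cvl_0_0 and cvl_0_1. A standalone sketch of that resolution step; the glob and the ##*/ trimming mirror the pci_net_devs lines in the trace, the hard-coded device list is for illustration only, and the operstate read is an assumption about what the trace's [[ up == up ]] test checks:

for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one directory per netdev
    [[ -e ${pci_net_devs[0]} ]] || continue            # function has no net device
    # presumed equivalent of the trace's [[ up == up ]] check:
    # [[ $(< "${pci_net_devs[0]}/operstate") == up ]]
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep bare interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done

(The earlier "[: : integer expression expected" complaint from nvmf/common.sh line 33 is a harmless harness quirk: an unset variable reaches a numeric '[ ... -eq 1 ]' test as an empty string, bash prints the warning, and the branch simply evaluates false.)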
00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # is_hw=yes 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:07:08.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:08.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:07:08.276 00:07:08.276 --- 10.0.0.2 ping statistics --- 00:07:08.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.276 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:08.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:08.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:07:08.276 00:07:08.276 --- 10.0.0.1 ping statistics --- 00:07:08.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:08.276 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # return 0 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@725 -- # xtrace_disable 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # nvmfpid=1076821 00:07:08.276 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # waitforlisten 1076821 00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # '[' -z 1076821 ']' 00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
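The nvmf_tcp_init block above builds the fabric for this phy run: with the two E810 ports presumably cabled back-to-back (NET_TYPE=phy), moving the target-side port cvl_0_0 into a private network namespace keeps the kernel from short-circuiting initiator-to-target traffic on the same host. Condensed from the trace, where every command appears verbatim (run as root):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Both pings answering in well under a millisecond validates the 10.0.0.0/24 link, after which NVMF_APP is prefixed with "ip netns exec cvl_0_0_ns_spdk" so that nvmf_tgt (pid 1076821 here) starts inside the namespace and listens on 10.0.0.2.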
00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:08.277 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.277 [2024-07-24 19:36:25.439481] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:07:08.277 [2024-07-24 19:36:25.439580] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:08.277 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.277 [2024-07-24 19:36:25.503915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.277 [2024-07-24 19:36:25.613526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:08.277 [2024-07-24 19:36:25.613610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:08.277 [2024-07-24 19:36:25.613624] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:08.277 [2024-07-24 19:36:25.613635] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:08.277 [2024-07-24 19:36:25.613644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:08.277 [2024-07-24 19:36:25.613696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.277 [2024-07-24 19:36:25.613701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@865 -- # return 0 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@731 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 [2024-07-24 19:36:25.759964] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 19:36:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 [2024-07-24 19:36:25.776189] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 NULL1 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 Delay0 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1076854 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:08.534 19:36:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:08.534 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.534 [2024-07-24 19:36:25.850858] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
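At this point the delete_subsystem test has everything staged: a TCP transport, subsystem cnode1 capped at 10 namespaces (-m 10), a listener on 10.0.0.2:4420, and a namespace backed by a delay bdev that injects one second of latency (1000000 us for average and p99, reads and writes alike) on top of the 1000 MB, 512-byte-block null bdev. The RPC sequence, condensed from the trace above:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                # 1000 MB, 512 B blocks
rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The artificial latency guarantees that spdk_nvme_perf (-t 5 -q 128, 70% reads at 512 B) has a deep queue in flight when nvmf_delete_subsystem is issued below. The flood of "Read/Write completed with error (sct=0, sc=8)" that follows is therefore the expected outcome, not a failure: status-code type 0 / status code 0x08 is the NVMe generic status "Command Aborted due to SQ Deletion", reported for every queued command as the qpairs are torn down. The interleaved "starting I/O failed: -6" lines are new submissions failing once a qpair is gone (-6 being consistent with -ENXIO), and the repeated nvme_tcp_qpair_set_recv_state messages are the initiator noting a no-op recv-state transition on each already-erroring qpair; note also the roughly one-second gap between the first burst of those messages (19:36:27.98x) and the later ones (19:36:28.9x), matching the delay bdev's injected latency.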
00:07:10.431 19:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.431 19:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:10.431 19:36:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 [2024-07-24 19:36:27.987118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181ac20 is same with the state(6) to be set 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Read completed with error (sct=0, sc=8) 
00:07:10.689 Read completed with error (sct=0, sc=8) 00:07:10.689 Write completed with error (sct=0, sc=8) [... identical Read/Write (sct=0, sc=8) completions repeated for the rest of this qpair's queue depth ...] 00:07:10.689 [2024-07-24 19:36:27.987917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a5c0 is same with the state(6) to be set 00:07:10.689 Write completed with error (sct=0, sc=8) 00:07:10.689 starting I/O failed: -6 [... repeated completions interleaved with 'starting I/O failed: -6' markers ...] 00:07:10.689 [2024-07-24 19:36:27.988455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb98000c00 is same with the state(6) to be set [... repeated completions ...] 00:07:11.622 [2024-07-24 19:36:28.947084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bac0 is same with the state(6) to be set [... repeated completions ...] 00:07:11.623 [2024-07-24 19:36:28.990171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb9800d000 is same with the state(6) to be set [... repeated completions ...] 00:07:11.623 Write
completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 [2024-07-24 19:36:28.990400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7feb9800d7c0 is same with the state(6) to be set 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 [2024-07-24 19:36:28.991429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a3e0 is same with the state(6) to be set 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Write completed 
with error (sct=0, sc=8) 00:07:11.623 Write completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 Read completed with error (sct=0, sc=8) 00:07:11.623 [2024-07-24 19:36:28.991930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181a8f0 is same with the state(6) to be set 00:07:11.623 Initializing NVMe Controllers 00:07:11.623 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.623 Controller IO queue size 128, less than required. 00:07:11.623 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:11.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:11.623 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:11.623 Initialization complete. Launching workers. 00:07:11.623 ======================================================== 00:07:11.623 Latency(us) 00:07:11.623 Device Information : IOPS MiB/s Average min max 00:07:11.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.35 0.08 911576.08 680.20 1011529.55 00:07:11.623 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.31 0.08 917649.51 422.72 2002300.24 00:07:11.623 ======================================================== 00:07:11.623 Total : 332.66 0.16 914667.19 422.72 2002300.24 00:07:11.623 00:07:11.623 [2024-07-24 19:36:28.992373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bac0 (9): Bad file descriptor 00:07:11.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:11.623 19:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:11.623 19:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:11.623 19:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1076854 00:07:11.623 19:36:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1076854 00:07:12.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1076854) - No such process 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1076854 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # local es=0 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # valid_exec_arg wait 1076854 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@639 -- # local arg=wait 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@643 -- # type -t wait 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@643 -- # case 
"$(type -t "$arg")" in 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # wait 1076854 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # es=1 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.189 [2024-07-24 19:36:29.515159] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1077262 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262 00:07:12.189 19:36:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:12.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.447 [2024-07-24 19:36:29.578999] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:12.704 19:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:12.704 19:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:12.704 19:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:13.287 19:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:13.287 19:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:13.287 19:36:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:13.867 19:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:13.867 19:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:13.867 19:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:14.432 19:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:14.432 19:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:14.432 19:36:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:14.689 19:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:14.689 19:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:14.689 19:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:15.254 19:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:15.254 19:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:15.254 19:36:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:15.618 Initializing NVMe Controllers
00:07:15.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:15.618 Controller IO queue size 128, less than required.
00:07:15.618 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:15.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:15.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:15.618 Initialization complete. Launching workers.
00:07:15.618 ========================================================
00:07:15.618                                                           Latency(us)
00:07:15.618 Device Information                                      :   IOPS   MiB/s    Average        min         max
00:07:15.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003502.08 1000226.05 1014436.64
00:07:15.618 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005705.56 1000211.04 1042455.69
00:07:15.618 ========================================================
00:07:15.618 Total                                                   : 256.00 0.12 1004603.82 1000211.04 1042455.69
00:07:15.618
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1077262
00:07:15.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1077262) - No such process
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1077262
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # nvmfcleanup
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:15.877 rmmod nvme_tcp
00:07:15.877 rmmod nvme_fabrics
00:07:15.877 rmmod nvme_keyring
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # '[' -n 1076821 ']'
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # killprocess 1076821
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' -z 1076821 ']'
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # kill -0 1076821
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # uname
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1076821
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
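For reference, the delay/kill -0 entries traced above come from a small poll loop in delete_subsystem.sh that gives spdk_nvme_perf time to exit on its own once its subsystem is deleted. A minimal sketch reconstructed from the traced lines 56-60 (illustrative, not the verbatim SPDK source; the failure branch is assumed):

    # give perf ~10s (20 iterations x 0.5s) to notice the deleted
    # subsystem and exit; fail the test if it is still running
    delay=0
    while kill -0 "$perf_pid"; do   # prints "No such process" once perf is gone
        sleep 0.5
        if ((delay++ > 20)); then   # traced as delete_subsystem.sh@60
            echo "perf did not exit in time" >&2
            return 1                # runs inside the test function
        fi
    done
    NOT wait "$perf_pid"            # reap status; perf is expected to fail

The earlier loop for pid 1076854 (lines 34-38) has the same shape with a threshold of 30.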
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1076821'
00:07:15.877 killing process with pid 1076821
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # kill 1076821
00:07:15.877 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@975 -- # wait 1076821
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' '' == iso ']'
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]]
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@282 -- # remove_spdk_ns
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:16.136 19:36:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1
00:07:18.664
00:07:18.664 real	0m12.216s
00:07:18.664 user	0m27.790s
00:07:18.664 sys	0m2.874s
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # xtrace_disable
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:18.664 ************************************
00:07:18.664 END TEST nvmf_delete_subsystem
00:07:18.664 ************************************
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:18.664 ************************************
00:07:18.664 START TEST nvmf_host_management
00:07:18.664 ************************************
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:18.664 * Looking for test storage...
00:07:18.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same golangci/protoc/go triple repeated from earlier sourcings, elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeated toolchain segments elided]:/var/lib/snapd/snap/bin
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain segments elided]:/var/lib/snapd/snap/bin
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain segments elided]:/var/lib/snapd/snap/bin
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
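The duplicated toolchain segments in the PATH values above are a side effect of paths/export.sh being re-sourced by every test script. A hedged reconstruction from the traced line numbers (@2-@6), not the verbatim file:

    # /etc/opt/spdk-pkgdep/paths/export.sh (reconstructed sketch)
    PATH=/opt/golangci/1.54.2/bin:$PATH   # line 2: prepend golangci-lint
    PATH=/opt/go/1.21.1/bin:$PATH         # line 3: prepend the Go toolchain
    PATH=/opt/protoc/21.7/bin:$PATH       # line 4: prepend protoc
    export PATH                           # line 5
    echo $PATH                            # line 6, visible in the trace

Each sourcing prepends the three directories again, which is why the traced PATH grows by one protoc/go/golangci triple per test that sources nvmf/common.sh.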
00:07:18.664 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # '[' -z tcp ']'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@452 -- # prepare_net_devs
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # local -g is_hw=no
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # remove_spdk_ns
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:18.664 19:36:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ phy != virt ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # xtrace_disable
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # pci_devs=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -a pci_devs
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # pci_net_devs=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -a pci_net_devs
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # pci_drivers=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -A pci_drivers
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # net_devs=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # local -ga net_devs
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # e810=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # local -ga e810
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # x722=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # local -ga x722
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # mlx=()
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # local -ga mlx
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # [[ tcp == rdma ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@333 -- # [[ e810 == e810 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # (( 2 == 0 ))
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}"
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:07:20.565 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unknown ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ ice == unbound ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # [[ tcp == rdma ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}"
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:07:20.565 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unknown ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ ice == unbound ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # [[ tcp == rdma ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@370 -- # (( 0 > 0 ))
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ e810 == e810 ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}"
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # [[ tcp == tcp ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # [[ up == up ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # (( 1 == 0 ))
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:07:20.565 Found net devices under 0000:0a:00.0: cvl_0_0
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}"
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # [[ tcp == tcp ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # [[ up == up ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # (( 1 == 0 ))
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:07:20.565 Found net devices under 0000:0a:00.1: cvl_0_1
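The gather_supported_nvmf_pci_devs trace above whitelists known Intel/Mellanox NIC device IDs and then maps each matching PCI function to its kernel net interface via sysfs. A sketch of the discovery loop as traced (nvmf/common.sh@386-405); the surrounding variable setup is elided and the empty-array guard is an assumption:

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # the kernel exposes the bound interface name(s) under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        ((${#pci_net_devs[@]} == 0)) && continue   # assumed guard; trace shows (( 1 == 0 ))
        pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the path, keep the iface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

On this host both e810 ports (0000:0a:00.0 and 0000:0a:00.1) resolve to cvl_0_0 and cvl_0_1.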
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # (( 2 == 0 ))
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # is_hw=yes
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # [[ yes == yes ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@421 -- # [[ tcp == tcp ]]
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # nvmf_tcp_init
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 2 > 1 ))
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:20.565 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP=
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2
00:07:20.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:20.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:07:20.566
00:07:20.566 --- 10.0.0.2 ping statistics ---
00:07:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:20.566 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:20.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:20.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:07:20.566
00:07:20.566 --- 10.0.0.1 ping statistics ---
00:07:20.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:20.566 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # return 0
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # '[' '' == iso ']'
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]]
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]]
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # '[' tcp == tcp ']'
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # modprobe nvme-tcp
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@725 -- # xtrace_disable
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@485 -- # nvmfpid=1079672
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@486 -- # waitforlisten 1079672
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@832 -- # '[' -z 1079672 ']'
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local max_retries=100
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
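Consolidating the nvmf_tcp_init commands traced above (nvmf/common.sh@233-272): the target-side e810 port is moved into a private network namespace while the initiator port stays in the root namespace, giving the test a real initiator-to-target TCP path over physical hardware:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check

The nvmf target itself is then always launched through NVMF_TARGET_NS_CMD, i.e. prefixed with ip netns exec cvl_0_0_ns_spdk, as the nvmf_tgt invocation above shows.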
00:07:20.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@841 -- # xtrace_disable
00:07:20.566 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.566 [2024-07-24 19:36:37.683265] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:07:20.566 [2024-07-24 19:36:37.683344] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:20.566 EAL: No free 2048 kB hugepages reported on node 1
00:07:20.566 [2024-07-24 19:36:37.749042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:20.566 [2024-07-24 19:36:37.858095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:20.566 [2024-07-24 19:36:37.858152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:20.566 [2024-07-24 19:36:37.858165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:20.566 [2024-07-24 19:36:37.858177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:20.566 [2024-07-24 19:36:37.858187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:20.566 [2024-07-24 19:36:37.858269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:20.566 [2024-07-24 19:36:37.858335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:20.566 [2024-07-24 19:36:37.858373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:07:20.566 [2024-07-24 19:36:37.858378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:20.824 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:07:20.824 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@865 -- # return 0
00:07:20.824 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt
00:07:20.824 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@731 -- # xtrace_disable
00:07:20.824 19:36:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.824 [2024-07-24 19:36:38.016828] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
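waitforlisten, traced above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly started target answers on its RPC socket. A minimal sketch of the idea (an assumed implementation; the real helper in common/autotest_common.sh is more thorough):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" || return 1      # app died during startup
            [[ -S $rpc_addr ]] && return 0  # RPC socket is up: ready
            sleep 0.1                       # assumed poll interval
        done
        return 1
    }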
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@725 -- # xtrace_disable
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.824 Malloc0
00:07:20.824 [2024-07-24 19:36:38.077923] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@731 -- # xtrace_disable
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1079772
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1079772 /var/tmp/bdevperf.sock
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@832 -- # '[' -z 1079772 ']'
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local max_retries=100
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@536 -- # config=()
00:07:20.824 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:20.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@841 -- # xtrace_disable
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}"
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF
00:07:20.825 {
00:07:20.825   "params": {
00:07:20.825     "name": "Nvme$subsystem",
00:07:20.825     "trtype": "$TEST_TRANSPORT",
00:07:20.825     "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:20.825     "adrfam": "ipv4",
00:07:20.825     "trsvcid": "$NVMF_PORT",
00:07:20.825     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:20.825     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:20.825     "hdgst": ${hdgst:-false},
00:07:20.825     "ddgst": ${ddgst:-false}
00:07:20.825   },
00:07:20.825   "method": "bdev_nvme_attach_controller"
00:07:20.825 }
00:07:20.825 EOF
00:07:20.825 )")
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # cat
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # jq .
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=,
00:07:20.825 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{
00:07:20.825 "params": {
00:07:20.825 "name": "Nvme0",
00:07:20.825 "trtype": "tcp",
00:07:20.825 "traddr": "10.0.0.2",
00:07:20.825 "adrfam": "ipv4",
00:07:20.825 "trsvcid": "4420",
00:07:20.825 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:07:20.825 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:07:20.825 "hdgst": false,
00:07:20.825 "ddgst": false
00:07:20.825 },
00:07:20.825 "method": "bdev_nvme_attach_controller"
00:07:20.825 }'
00:07:21.082 [2024-07-24 19:36:38.158481] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:07:21.082 [2024-07-24 19:36:38.158582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079772 ]
00:07:21.082 EAL: No free 2048 kB hugepages reported on node 1
00:07:21.082 [2024-07-24 19:36:38.219198] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.082 [2024-07-24 19:36:38.328902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.340 Running I/O for 10 seconds...
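gen_nvmf_target_json, traced above, emits one bdev_nvme_attach_controller entry per subsystem id and feeds the result to bdevperf via --json /dev/fd/63. A partial reconstruction from the traced heredoc (nvmf/common.sh@536-562); the real helper wraps these entries in a fuller bdev-subsystem config that the trace does not show:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<-EOF
            {
              "params": {
                "name": "Nvme$subsystem",
                "trtype": "$TEST_TRANSPORT",
                "traddr": "$NVMF_FIRST_TARGET_IP",
                "adrfam": "ipv4",
                "trsvcid": "$NVMF_PORT",
                "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
                "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
                "hdgst": ${hdgst:-false},
                "ddgst": ${ddgst:-false}
              },
              "method": "bdev_nvme_attach_controller"
            }
            EOF
            )")   # heredoc body indented with tabs so <<- strips them
        done
        local IFS=,
        printf '%s\n' "${config[*]}" | jq .   # join entries with ',' and pretty-print
    }

With the environment set up earlier (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420), gen_nvmf_target_json 0 resolves to exactly the Nvme0 block printed in the trace.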
00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@865 -- # return 0 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:21.598 19:36:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:21.857 19:36:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:21.857 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.857 [2024-07-24 19:36:39.113569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:21.857 [2024-07-24 19:36:39.113622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:21.857 [2024-07-24 19:36:39.113650 through 19:36:39.115655: 63 further command/completion notice pairs elided — every remaining outstanding command on qid:1 (WRITE cid 31-63, lba 77696-81792; READ cid 0-29, lba 73728-77440) completed with ABORTED - SQ DELETION (00/08)] 00:07:21.859 [2024-07-24 19:36:39.115744] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x285e5a0 was disconnected and freed. reset controller.
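That burst of aborts is the crux of the host-management test: the harness first gated on real traffic (read_io_count went 67, then 515, against the -ge 100 threshold), then revoked the host's authorization mid-run with nvmf_subsystem_remove_host, and the target responded by deleting the submission queue, aborting all 64 in-flight commands. The gating helper behaves as below — a sketch of waitforio from target/host_management.sh, reconstructed from the xtrace above:

waitforio() {
    # poll the bdev's completed read count over the bdevperf RPC socket
    # until it crosses a floor, giving up after 10 attempts
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}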
00:07:21.859 [2024-07-24 19:36:39.116903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:21.859 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:21.859 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:21.859 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:07:21.859 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:21.859 task offset: 77568 on job bdev=Nvme0n1 fails 00:07:21.859 00:07:21.859 Latency(us) 00:07:21.859 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.859 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:21.859 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:21.859 Verification LBA range: start 0x0 length 0x400 00:07:21.859 Nvme0n1 : 0.40 1436.57 89.79 159.62 0.00 38949.12 2742.80 35535.08 00:07:21.859 =================================================================================================================== 00:07:21.859 Total : 1436.57 89.79 159.62 0.00 38949.12 2742.80 35535.08 00:07:21.859 [2024-07-24 19:36:39.118820] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:21.859 [2024-07-24 19:36:39.118849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244d790 (9): Bad file descriptor 00:07:21.859 [2024-07-24 19:36:39.121010] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:21.859 [2024-07-24 19:36:39.121230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:21.859 [2024-07-24 19:36:39.121269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:21.859 [2024-07-24 19:36:39.121311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:21.859 [2024-07-24 19:36:39.121327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:21.859 [2024-07-24 19:36:39.121340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:21.859 [2024-07-24 19:36:39.121353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x244d790 00:07:21.859 [2024-07-24 19:36:39.121386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244d790 (9): Bad file descriptor 00:07:21.859 [2024-07-24 19:36:39.121411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:21.859 [2024-07-24 19:36:39.121424] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:21.859 [2024-07-24 19:36:39.121445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:21.859 [2024-07-24 19:36:39.121465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
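Note the race recorded above: the host is re-authorized at host_management.sh@85, but the reconnect attempt had already been rejected ("does not allow host"), so the lone controller stays in a failed state and bdevperf stops on non-zero. What follows is the harness reaping a process that is expected to be dead already; condensed, with the || true form inferred from the bare "true" in the xtrace:

sleep 1                   # host_management.sh@87: give bdevperf time to exit on its own
kill -9 $perfpid || true  # @91: "No such process" is the expected, tolerated outcome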
00:07:21.859 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:07:21.859 19:36:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1079772 00:07:22.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1079772) - No such process 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@536 -- # config=() 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:07:22.792 { 00:07:22.792 "params": { 00:07:22.792 "name": "Nvme$subsystem", 00:07:22.792 "trtype": "$TEST_TRANSPORT", 00:07:22.792 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.792 "adrfam": "ipv4", 00:07:22.792 "trsvcid": "$NVMF_PORT", 00:07:22.792 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.792 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.792 "hdgst": ${hdgst:-false}, 00:07:22.792 "ddgst": ${ddgst:-false} 00:07:22.792 }, 00:07:22.792 "method": "bdev_nvme_attach_controller" 00:07:22.792 } 00:07:22.792 EOF 00:07:22.792 )") 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # cat 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # jq . 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=, 00:07:22.792 19:36:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:07:22.792 "params": { 00:07:22.792 "name": "Nvme0", 00:07:22.792 "trtype": "tcp", 00:07:22.792 "traddr": "10.0.0.2", 00:07:22.792 "adrfam": "ipv4", 00:07:22.792 "trsvcid": "4420", 00:07:22.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:22.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:22.793 "hdgst": false, 00:07:22.793 "ddgst": false 00:07:22.793 }, 00:07:22.793 "method": "bdev_nvme_attach_controller" 00:07:22.793 }' 00:07:22.793 [2024-07-24 19:36:40.171256] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:07:22.793 [2024-07-24 19:36:40.171340] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080053 ] 00:07:23.050 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.050 [2024-07-24 19:36:40.231345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.050 [2024-07-24 19:36:40.344700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.308 Running I/O for 1 seconds... 00:07:24.241 00:07:24.241 Latency(us) 00:07:24.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.241 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:24.241 Verification LBA range: start 0x0 length 0x400 00:07:24.241 Nvme0n1 : 1.01 1649.75 103.11 0.00 0.00 38022.99 5873.97 40195.41 00:07:24.241 =================================================================================================================== 00:07:24.241 Total : 1649.75 103.11 0.00 0.00 38022.99 5873.97 40195.41 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:24.499 rmmod nvme_tcp 00:07:24.499 rmmod nvme_fabrics 00:07:24.499 rmmod nvme_keyring 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # '[' -n 1079672 ']' 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # killprocess 1079672 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' -z 1079672 ']' 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # kill -0 1079672 00:07:24.499 19:36:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # uname 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:24.499 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1079672 00:07:24.757 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:07:24.757 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:07:24.757 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1079672' 00:07:24.757 killing process with pid 1079672 00:07:24.757 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # kill 1079672 00:07:24.757 19:36:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@975 -- # wait 1079672 00:07:25.016 [2024-07-24 19:36:42.176566] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:25.016 19:36:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:26.916 00:07:26.916 real 0m8.733s 00:07:26.916 user 0m20.407s 00:07:26.916 sys 0m2.517s 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:26.916 ************************************ 00:07:26.916 END TEST nvmf_host_management 00:07:26.916 ************************************ 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:26.916 19:36:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.174 ************************************ 00:07:27.174 START TEST nvmf_lvol 00:07:27.174 ************************************ 00:07:27.174 
19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:27.174 * Looking for test storage... 00:07:27.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.174 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.174 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same golangci/protoc/go toolchain triplet repeated several more times, elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...repeated toolchain entries elided...] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...repeated toolchain entries elided...] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...repeated toolchain entries elided...] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: :
integer expression expected 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # xtrace_disable 00:07:27.175 19:36:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # pci_devs=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -a pci_devs 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # pci_net_devs=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # pci_drivers=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -A pci_drivers 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # net_devs=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # local -ga net_devs 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # e810=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # local -ga e810 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@301 -- # x722=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # local -ga x722 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # mlx=() 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # local -ga mlx 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:29.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:07:29.075 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:29.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # [[ up == up ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:29.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # [[ up == up ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:29.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # is_hw=yes 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:07:29.076 19:36:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:07:29.076 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:07:29.333 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.333 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:07:29.333 00:07:29.333 --- 10.0.0.2 ping statistics --- 00:07:29.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.333 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.333 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:29.333 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:07:29.333 00:07:29.333 --- 10.0.0.1 ping statistics --- 00:07:29.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.333 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # return 0 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@725 -- # xtrace_disable 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@485 -- # nvmfpid=1082137 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@486 -- # waitforlisten 1082137 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@832 -- # '[' -z 1082137 ']' 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:29.333 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.333 [2024-07-24 19:36:46.582842] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:07:29.333 [2024-07-24 19:36:46.582925] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.333 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.333 [2024-07-24 19:36:46.648720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.590 [2024-07-24 19:36:46.759305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:29.590 [2024-07-24 19:36:46.759367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:29.590 [2024-07-24 19:36:46.759382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:29.590 [2024-07-24 19:36:46.759394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:29.590 [2024-07-24 19:36:46.759404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:29.590 [2024-07-24 19:36:46.759454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.590 [2024-07-24 19:36:46.759512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.590 [2024-07-24 19:36:46.759515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@865 -- # return 0 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@731 -- # xtrace_disable 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:29.590 19:36:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:29.847 [2024-07-24 19:36:47.124035] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:29.847 19:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.105 19:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.105 19:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.362 19:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:30.363 19:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:30.619 19:36:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:30.876 19:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=016081b3-17cf-4bb5-87cc-3033159b9112 
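The trace above provisions the lvol test bed over RPC: two 64 MiB malloc bdevs are striped into a raid0 bdev, an lvstore named "lvs" is created on top of it, and its UUID is captured for the steps that follow (a 20 MiB lvol, an NVMe-oF subsystem, a namespace, and a TCP listener). A minimal sketch of that same sequence, assuming a running nvmf_tgt (here launched inside the cvl_0_0_ns_spdk namespace) with the tcp transport already created; the $rpc shorthand is an editorial convenience, and the UUIDs are captured at runtime rather than reused from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Two 64 MiB malloc bdevs with 512-byte blocks back the raid (names Malloc0/Malloc1 are returned).
    $rpc bdev_malloc_create 64 512
    $rpc bdev_malloc_create 64 512
    # Stripe them into raid0 with a 64 KiB strip size.
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    # Create the lvstore on the raid; the call prints the lvstore UUID.
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    # Carve a 20 MiB lvol from the store and export it over NVMe/TCP.
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Every command here appears verbatim in the trace; capturing the lvstore and lvol identifiers into variables mirrors how nvmf_lvol.sh threads them through the later snapshot, resize, clone, and inflate calls.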
00:07:30.876 19:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 016081b3-17cf-4bb5-87cc-3033159b9112 lvol 20 00:07:31.133 19:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3dc57a99-c705-4acc-a1f7-9179eb7969ac 00:07:31.133 19:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.390 19:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3dc57a99-c705-4acc-a1f7-9179eb7969ac 00:07:31.696 19:36:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.953 [2024-07-24 19:36:49.181276] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:31.953 19:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.210 19:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1082560 00:07:32.210 19:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:32.210 19:36:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:32.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.141 19:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3dc57a99-c705-4acc-a1f7-9179eb7969ac MY_SNAPSHOT 00:07:33.399 19:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e435298b-e79b-470d-b107-857b385629c9 00:07:33.399 19:36:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3dc57a99-c705-4acc-a1f7-9179eb7969ac 30 00:07:33.656 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e435298b-e79b-470d-b107-857b385629c9 MY_CLONE 00:07:34.220 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=a0443ae0-15fa-4c4d-931e-7e25b2cff462 00:07:34.220 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a0443ae0-15fa-4c4d-931e-7e25b2cff462 00:07:34.785 19:36:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1082560 00:07:42.892 Initializing NVMe Controllers 00:07:42.892 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.892 Controller IO queue size 128, less than required. 00:07:42.892 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:42.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.892 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.892 Initialization complete. Launching workers. 00:07:42.892 ======================================================== 00:07:42.892 Latency(us) 00:07:42.893 Device Information : IOPS MiB/s Average min max 00:07:42.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10567.40 41.28 12112.98 1550.37 123099.86 00:07:42.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10345.70 40.41 12372.57 2425.11 52184.30 00:07:42.893 ======================================================== 00:07:42.893 Total : 20913.10 81.69 12241.40 1550.37 123099.86 00:07:42.893 00:07:42.893 19:36:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.893 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3dc57a99-c705-4acc-a1f7-9179eb7969ac 00:07:43.149 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 016081b3-17cf-4bb5-87cc-3033159b9112 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # nvmfcleanup 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.407 rmmod nvme_tcp 00:07:43.407 rmmod nvme_fabrics 00:07:43.407 rmmod nvme_keyring 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # '[' -n 1082137 ']' 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # killprocess 1082137 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' -z 1082137 ']' 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # kill -0 1082137 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # uname 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1082137 00:07:43.407 19:37:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1082137' 00:07:43.407 killing process with pid 1082137 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # kill 1082137 00:07:43.407 19:37:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@975 -- # wait 1082137 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@282 -- # remove_spdk_ns 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:43.972 19:37:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:07:45.872 00:07:45.872 real 0m18.802s 00:07:45.872 user 1m3.825s 00:07:45.872 sys 0m5.611s 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # xtrace_disable 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.872 ************************************ 00:07:45.872 END TEST nvmf_lvol 00:07:45.872 ************************************ 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:45.872 ************************************ 00:07:45.872 START TEST nvmf_lvs_grow 00:07:45.872 ************************************ 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.872 * Looking for test storage... 
00:07:45.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.872 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.873 19:37:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:45.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.873 19:37:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@452 -- # prepare_net_devs 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # local -g is_hw=no 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # remove_spdk_ns 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # xtrace_disable 00:07:45.873 19:37:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # pci_devs=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -a pci_devs 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # pci_net_devs=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # pci_drivers=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -A pci_drivers 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # net_devs=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # local -ga net_devs 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # e810=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # local -ga e810 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # x722=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # local -ga x722 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # mlx=() 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # local -ga mlx 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:48.402 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:48.402 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@370 -- # (( 0 > 0 )) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # [[ up == up ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:48.402 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # [[ up == up ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:48.402 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # is_hw=yes 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:07:48.402 19:37:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:07:48.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:07:48.402 00:07:48.402 --- 10.0.0.2 ping statistics --- 00:07:48.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.402 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:48.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:48.402 00:07:48.402 --- 10.0.0.1 ping statistics --- 00:07:48.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.402 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # return 0 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@725 -- # xtrace_disable 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@485 -- # nvmfpid=1085837 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@486 -- # waitforlisten 1085837 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # '[' -z 1085837 ']' 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:48.402 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.402 [2024-07-24 19:37:05.539636] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:07:48.402 [2024-07-24 19:37:05.539730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.402 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.402 [2024-07-24 19:37:05.609435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.402 [2024-07-24 19:37:05.729375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.402 [2024-07-24 19:37:05.729442] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.402 [2024-07-24 19:37:05.729458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.402 [2024-07-24 19:37:05.729472] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.402 [2024-07-24 19:37:05.729483] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.402 [2024-07-24 19:37:05.729513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@865 -- # return 0 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@731 -- # xtrace_disable 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.661 19:37:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:48.919 [2024-07-24 19:37:06.104560] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1108 -- # xtrace_disable 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.919 ************************************ 00:07:48.919 START TEST lvs_grow_clean 00:07:48.919 ************************************ 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # lvs_grow 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:48.919 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:49.179 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:49.179 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:49.438 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:07:49.438 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:07:49.438 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:49.696 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:49.696 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:49.696 19:37:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 lvol 150 00:07:49.953 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f9670ab-c274-4303-a00d-ceeb5c0b20df 00:07:49.953 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:49.953 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:50.211 [2024-07-24 19:37:07.425561] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:50.211 [2024-07-24 19:37:07.425639] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:50.211 true 00:07:50.211 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:07:50.211 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:50.469 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:50.470 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:50.728 19:37:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f9670ab-c274-4303-a00d-ceeb5c0b20df 00:07:50.986 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:51.243 [2024-07-24 19:37:08.452802] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.243 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1086272 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1086272 /var/tmp/bdevperf.sock 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # '[' -z 1086272 ']' 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:51.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:07:51.502 19:37:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:51.502 [2024-07-24 19:37:08.757345] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:07:51.502 [2024-07-24 19:37:08.757421] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1086272 ] 00:07:51.502 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.502 [2024-07-24 19:37:08.819044] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.760 [2024-07-24 19:37:08.937056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.760 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:07:51.760 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@865 -- # return 0 00:07:51.760 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:52.326 Nvme0n1 00:07:52.326 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:52.584 [ 00:07:52.584 { 00:07:52.584 "name": "Nvme0n1", 00:07:52.584 "aliases": [ 00:07:52.584 "7f9670ab-c274-4303-a00d-ceeb5c0b20df" 00:07:52.584 ], 00:07:52.584 "product_name": "NVMe disk", 00:07:52.584 "block_size": 4096, 00:07:52.584 "num_blocks": 38912, 00:07:52.584 "uuid": "7f9670ab-c274-4303-a00d-ceeb5c0b20df", 00:07:52.584 "assigned_rate_limits": { 00:07:52.584 "rw_ios_per_sec": 0, 00:07:52.584 "rw_mbytes_per_sec": 0, 00:07:52.584 "r_mbytes_per_sec": 0, 00:07:52.584 "w_mbytes_per_sec": 0 00:07:52.584 }, 00:07:52.584 "claimed": false, 00:07:52.584 "zoned": false, 00:07:52.584 "supported_io_types": { 00:07:52.584 "read": true, 00:07:52.584 "write": true, 00:07:52.584 "unmap": true, 00:07:52.584 "flush": true, 00:07:52.584 "reset": true, 00:07:52.584 "nvme_admin": true, 00:07:52.584 "nvme_io": true, 00:07:52.584 "nvme_io_md": false, 00:07:52.584 "write_zeroes": true, 00:07:52.584 "zcopy": false, 00:07:52.584 "get_zone_info": false, 00:07:52.584 "zone_management": false, 00:07:52.584 "zone_append": false, 00:07:52.584 "compare": true, 00:07:52.584 "compare_and_write": true, 00:07:52.584 "abort": true, 00:07:52.584 "seek_hole": false, 00:07:52.584 "seek_data": false, 00:07:52.584 "copy": true, 00:07:52.584 "nvme_iov_md": false 00:07:52.584 }, 00:07:52.584 "memory_domains": [ 00:07:52.584 { 00:07:52.584 "dma_device_id": "system", 00:07:52.584 "dma_device_type": 1 00:07:52.584 } 00:07:52.584 ], 00:07:52.584 "driver_specific": { 00:07:52.584 "nvme": [ 00:07:52.584 { 00:07:52.584 "trid": { 00:07:52.584 "trtype": "TCP", 00:07:52.584 "adrfam": "IPv4", 00:07:52.584 "traddr": "10.0.0.2", 00:07:52.584 "trsvcid": "4420", 00:07:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:52.584 }, 00:07:52.584 "ctrlr_data": { 00:07:52.584 "cntlid": 1, 00:07:52.584 "vendor_id": "0x8086", 00:07:52.584 "model_number": "SPDK bdev Controller", 00:07:52.584 "serial_number": "SPDK0", 00:07:52.584 "firmware_revision": "24.09", 00:07:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.584 "oacs": { 00:07:52.584 "security": 0, 00:07:52.584 "format": 0, 00:07:52.584 "firmware": 0, 00:07:52.584 "ns_manage": 0 00:07:52.584 }, 00:07:52.584 
"multi_ctrlr": true, 00:07:52.584 "ana_reporting": false 00:07:52.584 }, 00:07:52.584 "vs": { 00:07:52.584 "nvme_version": "1.3" 00:07:52.584 }, 00:07:52.584 "ns_data": { 00:07:52.584 "id": 1, 00:07:52.584 "can_share": true 00:07:52.584 } 00:07:52.584 } 00:07:52.584 ], 00:07:52.584 "mp_policy": "active_passive" 00:07:52.584 } 00:07:52.584 } 00:07:52.584 ] 00:07:52.584 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1086408 00:07:52.584 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:52.584 19:37:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:52.584 Running I/O for 10 seconds... 00:07:53.518 Latency(us) 00:07:53.518 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.518 Nvme0n1 : 1.00 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:07:53.518 =================================================================================================================== 00:07:53.518 Total : 13844.00 54.08 0.00 0.00 0.00 0.00 0.00 00:07:53.518 00:07:54.487 19:37:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:07:54.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.487 Nvme0n1 : 2.00 13908.50 54.33 0.00 0.00 0.00 0.00 0.00 00:07:54.487 =================================================================================================================== 00:07:54.487 Total : 13908.50 54.33 0.00 0.00 0.00 0.00 0.00 00:07:54.487 00:07:54.745 true 00:07:54.745 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:07:54.745 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:55.003 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:55.003 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:55.003 19:37:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1086408 00:07:55.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.569 Nvme0n1 : 3.00 14056.00 54.91 0.00 0.00 0.00 0.00 0.00 00:07:55.569 =================================================================================================================== 00:07:55.569 Total : 14056.00 54.91 0.00 0.00 0.00 0.00 0.00 00:07:55.569 00:07:56.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.517 Nvme0n1 : 4.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:07:56.517 =================================================================================================================== 00:07:56.517 Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:07:56.517 00:07:57.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:07:57.890 Nvme0n1 : 5.00 14200.00 55.47 0.00 0.00 0.00 0.00 0.00 00:07:57.890 =================================================================================================================== 00:07:57.890 Total : 14200.00 55.47 0.00 0.00 0.00 0.00 0.00 00:07:57.890 00:07:58.821 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.821 Nvme0n1 : 6.00 14214.83 55.53 0.00 0.00 0.00 0.00 0.00 00:07:58.821 =================================================================================================================== 00:07:58.821 Total : 14214.83 55.53 0.00 0.00 0.00 0.00 0.00 00:07:58.821 00:07:59.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.753 Nvme0n1 : 7.00 14234.57 55.60 0.00 0.00 0.00 0.00 0.00 00:07:59.753 =================================================================================================================== 00:07:59.753 Total : 14234.57 55.60 0.00 0.00 0.00 0.00 0.00 00:07:59.753 00:08:00.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.685 Nvme0n1 : 8.00 14249.25 55.66 0.00 0.00 0.00 0.00 0.00 00:08:00.685 =================================================================================================================== 00:08:00.685 Total : 14249.25 55.66 0.00 0.00 0.00 0.00 0.00 00:08:00.685 00:08:01.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.617 Nvme0n1 : 9.00 14274.67 55.76 0.00 0.00 0.00 0.00 0.00 00:08:01.617 =================================================================================================================== 00:08:01.617 Total : 14274.67 55.76 0.00 0.00 0.00 0.00 0.00 00:08:01.617 00:08:02.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.549 Nvme0n1 : 10.00 14282.30 55.79 0.00 0.00 0.00 0.00 0.00 00:08:02.549 =================================================================================================================== 00:08:02.549 Total : 14282.30 55.79 0.00 0.00 0.00 0.00 0.00 00:08:02.549 00:08:02.549 00:08:02.549 Latency(us) 00:08:02.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.549 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.549 Nvme0n1 : 10.00 14289.30 55.82 0.00 0.00 8953.39 5606.97 18155.90 00:08:02.549 =================================================================================================================== 00:08:02.549 Total : 14289.30 55.82 0.00 0.00 8953.39 5606.97 18155.90 00:08:02.549 0 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1086272 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' -z 1086272 ']' 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # kill -0 1086272 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # uname 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1086272 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:08:02.549 
19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1086272' 00:08:02.549 killing process with pid 1086272 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # kill 1086272 00:08:02.549 Received shutdown signal, test time was about 10.000000 seconds 00:08:02.549 00:08:02.549 Latency(us) 00:08:02.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:02.549 =================================================================================================================== 00:08:02.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:02.549 19:37:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@975 -- # wait 1086272 00:08:03.113 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:03.370 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:03.627 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:03.627 19:37:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:03.884 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:03.884 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:03.884 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:04.142 [2024-07-24 19:37:21.294053] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # local es=0 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:04.142 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:04.400 request: 00:08:04.400 { 00:08:04.400 "uuid": "5bf1c0ca-8621-4442-8b6b-3266b5e33324", 00:08:04.400 "method": "bdev_lvol_get_lvstores", 00:08:04.400 "req_id": 1 00:08:04.400 } 00:08:04.400 Got JSON-RPC error response 00:08:04.400 response: 00:08:04.400 { 00:08:04.400 "code": -19, 00:08:04.400 "message": "No such device" 00:08:04.400 } 00:08:04.400 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # es=1 00:08:04.400 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:08:04.400 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:08:04.400 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:08:04.400 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:04.657 aio_bdev 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7f9670ab-c274-4303-a00d-ceeb5c0b20df 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_name=7f9670ab-c274-4303-a00d-ceeb5c0b20df 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local i 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:08:04.657 19:37:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:04.915 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 7f9670ab-c274-4303-a00d-ceeb5c0b20df -t 2000 00:08:05.172 [ 00:08:05.172 { 00:08:05.172 "name": "7f9670ab-c274-4303-a00d-ceeb5c0b20df", 00:08:05.172 "aliases": [ 00:08:05.172 "lvs/lvol" 00:08:05.172 ], 00:08:05.172 "product_name": "Logical Volume", 00:08:05.172 "block_size": 4096, 00:08:05.172 "num_blocks": 38912, 00:08:05.172 "uuid": "7f9670ab-c274-4303-a00d-ceeb5c0b20df", 00:08:05.172 "assigned_rate_limits": { 00:08:05.172 "rw_ios_per_sec": 0, 00:08:05.172 "rw_mbytes_per_sec": 0, 00:08:05.172 "r_mbytes_per_sec": 0, 00:08:05.172 "w_mbytes_per_sec": 0 00:08:05.172 }, 00:08:05.172 "claimed": false, 00:08:05.172 "zoned": false, 00:08:05.172 "supported_io_types": { 00:08:05.172 "read": true, 00:08:05.172 "write": true, 00:08:05.172 "unmap": true, 00:08:05.172 "flush": false, 00:08:05.172 "reset": true, 00:08:05.172 "nvme_admin": false, 00:08:05.172 "nvme_io": false, 00:08:05.172 "nvme_io_md": false, 00:08:05.172 "write_zeroes": true, 00:08:05.172 "zcopy": false, 00:08:05.172 "get_zone_info": false, 00:08:05.172 "zone_management": false, 00:08:05.172 "zone_append": false, 00:08:05.172 "compare": false, 00:08:05.172 "compare_and_write": false, 00:08:05.172 "abort": false, 00:08:05.172 "seek_hole": true, 00:08:05.172 "seek_data": true, 00:08:05.172 "copy": false, 00:08:05.172 "nvme_iov_md": false 00:08:05.172 }, 00:08:05.172 "driver_specific": { 00:08:05.172 "lvol": { 00:08:05.172 "lvol_store_uuid": "5bf1c0ca-8621-4442-8b6b-3266b5e33324", 00:08:05.172 "base_bdev": "aio_bdev", 00:08:05.172 "thin_provision": false, 00:08:05.172 "num_allocated_clusters": 38, 00:08:05.172 "snapshot": false, 00:08:05.172 "clone": false, 00:08:05.172 "esnap_clone": false 00:08:05.172 } 00:08:05.172 } 00:08:05.172 } 00:08:05.172 ] 00:08:05.172 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # return 0 00:08:05.172 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:05.172 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:05.429 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:05.429 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:05.429 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:05.687 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:05.687 19:37:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f9670ab-c274-4303-a00d-ceeb5c0b20df 00:08:05.944 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bf1c0ca-8621-4442-8b6b-3266b5e33324 00:08:06.200 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.458 00:08:06.458 real 0m17.634s 00:08:06.458 user 0m17.156s 00:08:06.458 sys 0m1.910s 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:06.458 ************************************ 00:08:06.458 END TEST lvs_grow_clean 00:08:06.458 ************************************ 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.458 ************************************ 00:08:06.458 START TEST lvs_grow_dirty 00:08:06.458 ************************************ 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # lvs_grow dirty 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.458 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:06.716 19:37:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.973 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:06.973 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:07.231 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=1e4eb353-b333-4adb-b162-0383de812dc1 00:08:07.231 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:07.231 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:07.488 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:07.488 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:07.488 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1e4eb353-b333-4adb-b162-0383de812dc1 lvol 150 00:08:07.746 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:07.746 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:07.746 19:37:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:08.004 [2024-07-24 19:37:25.152563] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:08.004 [2024-07-24 19:37:25.152640] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:08.004 true 00:08:08.004 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:08.004 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:08.261 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:08.262 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:08.519 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:08.777 19:37:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:09.035 [2024-07-24 19:37:26.259962] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:09.035 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
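
The dirty variant just traced rebuilds the same topology, but on an AIO file that is later enlarged underneath the lvstore. Condensed from the trace (paths abbreviated, numbers taken from this run):

    rpc="scripts/rpc.py"
    truncate -s 200M aio_file                 # test/nvmf/target/aio_bdev in the trace
    $rpc bdev_aio_create aio_file aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters again
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)           # 150 MiB lvol

    truncate -s 400M aio_file      # grow the backing file ...
    $rpc bdev_aio_rescan aio_bdev  # ... resizing the bdev from 51200 to 102400 blocks

Note that after the rescan the lvstore still reports 49 total_data_clusters; it only jumps to 99 once bdev_lvol_grow_lvstore runs while the bdevperf workload below is already in flight.
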
00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1088459 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1088459 /var/tmp/bdevperf.sock 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # '[' -z 1088459 ']' 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:09.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:09.293 19:37:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.293 [2024-07-24 19:37:26.588009] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:08:09.293 [2024-07-24 19:37:26.588104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088459 ] 00:08:09.293 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.293 [2024-07-24 19:37:26.649170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.552 [2024-07-24 19:37:26.765359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.158 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:10.158 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@865 -- # return 0 00:08:10.158 19:37:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:10.724 Nvme0n1 00:08:10.724 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:10.982 [ 00:08:10.982 { 00:08:10.982 "name": "Nvme0n1", 00:08:10.982 "aliases": [ 00:08:10.982 "53eca3df-c7bf-41f3-a9ba-979754d702cb" 00:08:10.982 ], 00:08:10.982 "product_name": "NVMe disk", 00:08:10.982 "block_size": 4096, 00:08:10.982 "num_blocks": 38912, 00:08:10.982 "uuid": "53eca3df-c7bf-41f3-a9ba-979754d702cb", 00:08:10.982 "assigned_rate_limits": { 00:08:10.982 "rw_ios_per_sec": 0, 00:08:10.982 "rw_mbytes_per_sec": 0, 00:08:10.982 "r_mbytes_per_sec": 0, 00:08:10.982 "w_mbytes_per_sec": 0 00:08:10.982 }, 00:08:10.982 "claimed": false, 00:08:10.982 "zoned": false, 00:08:10.982 "supported_io_types": { 00:08:10.982 "read": true, 00:08:10.982 "write": true, 00:08:10.982 "unmap": true, 00:08:10.982 "flush": true, 00:08:10.982 "reset": true, 00:08:10.982 "nvme_admin": true, 00:08:10.982 "nvme_io": true, 00:08:10.982 "nvme_io_md": false, 00:08:10.982 "write_zeroes": true, 00:08:10.982 "zcopy": false, 00:08:10.982 "get_zone_info": false, 00:08:10.982 "zone_management": false, 00:08:10.982 "zone_append": false, 00:08:10.982 "compare": true, 00:08:10.982 "compare_and_write": true, 00:08:10.982 "abort": true, 00:08:10.982 "seek_hole": false, 00:08:10.982 "seek_data": false, 00:08:10.982 "copy": true, 00:08:10.982 "nvme_iov_md": false 00:08:10.982 }, 00:08:10.982 "memory_domains": [ 00:08:10.982 { 00:08:10.982 "dma_device_id": "system", 00:08:10.982 "dma_device_type": 1 00:08:10.982 } 00:08:10.982 ], 00:08:10.982 "driver_specific": { 00:08:10.982 "nvme": [ 00:08:10.982 { 00:08:10.982 "trid": { 00:08:10.982 "trtype": "TCP", 00:08:10.982 "adrfam": "IPv4", 00:08:10.982 "traddr": "10.0.0.2", 00:08:10.982 "trsvcid": "4420", 00:08:10.982 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:10.982 }, 00:08:10.982 "ctrlr_data": { 00:08:10.982 "cntlid": 1, 00:08:10.982 "vendor_id": "0x8086", 00:08:10.982 "model_number": "SPDK bdev Controller", 00:08:10.982 "serial_number": "SPDK0", 00:08:10.982 "firmware_revision": "24.09", 00:08:10.982 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:10.982 "oacs": { 00:08:10.982 "security": 0, 00:08:10.982 "format": 0, 00:08:10.982 "firmware": 0, 00:08:10.982 "ns_manage": 0 00:08:10.982 }, 00:08:10.982 
"multi_ctrlr": true, 00:08:10.982 "ana_reporting": false 00:08:10.982 }, 00:08:10.982 "vs": { 00:08:10.982 "nvme_version": "1.3" 00:08:10.982 }, 00:08:10.982 "ns_data": { 00:08:10.982 "id": 1, 00:08:10.982 "can_share": true 00:08:10.982 } 00:08:10.982 } 00:08:10.982 ], 00:08:10.982 "mp_policy": "active_passive" 00:08:10.982 } 00:08:10.982 } 00:08:10.982 ] 00:08:10.982 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1088605 00:08:10.982 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:10.982 19:37:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:10.982 Running I/O for 10 seconds... 00:08:12.355 Latency(us) 00:08:12.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.356 Nvme0n1 : 1.00 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:08:12.356 =================================================================================================================== 00:08:12.356 Total : 14098.00 55.07 0.00 0.00 0.00 0.00 0.00 00:08:12.356 00:08:12.921 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:13.179 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.179 Nvme0n1 : 2.00 14320.00 55.94 0.00 0.00 0.00 0.00 0.00 00:08:13.179 =================================================================================================================== 00:08:13.179 Total : 14320.00 55.94 0.00 0.00 0.00 0.00 0.00 00:08:13.179 00:08:13.179 true 00:08:13.179 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:13.179 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:13.437 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.437 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.437 19:37:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1088605 00:08:14.003 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.003 Nvme0n1 : 3.00 14376.00 56.16 0.00 0.00 0.00 0.00 0.00 00:08:14.003 =================================================================================================================== 00:08:14.003 Total : 14376.00 56.16 0.00 0.00 0.00 0.00 0.00 00:08:14.003 00:08:15.377 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.377 Nvme0n1 : 4.00 14499.75 56.64 0.00 0.00 0.00 0.00 0.00 00:08:15.377 =================================================================================================================== 00:08:15.377 Total : 14499.75 56.64 0.00 0.00 0.00 0.00 0.00 00:08:15.377 00:08:16.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:16.310 Nvme0n1 : 5.00 14597.00 57.02 0.00 0.00 0.00 0.00 0.00 00:08:16.310 =================================================================================================================== 00:08:16.310 Total : 14597.00 57.02 0.00 0.00 0.00 0.00 0.00 00:08:16.310 00:08:17.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.243 Nvme0n1 : 6.00 14672.67 57.32 0.00 0.00 0.00 0.00 0.00 00:08:17.243 =================================================================================================================== 00:08:17.243 Total : 14672.67 57.32 0.00 0.00 0.00 0.00 0.00 00:08:17.243 00:08:18.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.176 Nvme0n1 : 7.00 14681.14 57.35 0.00 0.00 0.00 0.00 0.00 00:08:18.176 =================================================================================================================== 00:08:18.176 Total : 14681.14 57.35 0.00 0.00 0.00 0.00 0.00 00:08:18.176 00:08:19.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.109 Nvme0n1 : 8.00 14695.62 57.40 0.00 0.00 0.00 0.00 0.00 00:08:19.109 =================================================================================================================== 00:08:19.109 Total : 14695.62 57.40 0.00 0.00 0.00 0.00 0.00 00:08:19.109 00:08:20.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.042 Nvme0n1 : 9.00 14735.00 57.56 0.00 0.00 0.00 0.00 0.00 00:08:20.042 =================================================================================================================== 00:08:20.043 Total : 14735.00 57.56 0.00 0.00 0.00 0.00 0.00 00:08:20.043 00:08:21.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.416 Nvme0n1 : 10.00 14772.90 57.71 0.00 0.00 0.00 0.00 0.00 00:08:21.416 =================================================================================================================== 00:08:21.416 Total : 14772.90 57.71 0.00 0.00 0.00 0.00 0.00 00:08:21.416 00:08:21.416 00:08:21.416 Latency(us) 00:08:21.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.416 Nvme0n1 : 10.00 14778.56 57.73 0.00 0.00 8656.28 5097.24 16699.54 00:08:21.416 =================================================================================================================== 00:08:21.416 Total : 14778.56 57.73 0.00 0.00 8656.28 5097.24 16699.54 00:08:21.416 0 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1088459 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' -z 1088459 ']' 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # kill -0 1088459 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # uname 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1088459 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:08:21.416 
19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1088459' 00:08:21.416 killing process with pid 1088459 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # kill 1088459 00:08:21.416 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.416 00:08:21.416 Latency(us) 00:08:21.416 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.416 =================================================================================================================== 00:08:21.416 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@975 -- # wait 1088459 00:08:21.416 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:21.674 19:37:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:21.931 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:21.931 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:22.189 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.189 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:22.189 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1085837 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1085837 00:08:22.190 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1085837 Killed "${NVMF_APP[@]}" "$@" 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@725 -- # xtrace_disable 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@485 -- # nvmfpid=1089944 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@486 -- # waitforlisten 1089944 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # '[' -z 1089944 ']' 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:22.190 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.448 [2024-07-24 19:37:39.591888] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:08:22.448 [2024-07-24 19:37:39.591980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.448 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.448 [2024-07-24 19:37:39.657194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.448 [2024-07-24 19:37:39.763470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.448 [2024-07-24 19:37:39.763543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.448 [2024-07-24 19:37:39.763568] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.448 [2024-07-24 19:37:39.763593] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.448 [2024-07-24 19:37:39.763603] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
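
What makes this the "dirty" case: a few entries above, the original nvmf target (pid 1085837) is killed with SIGKILL, so the lvstore on aio_bdev never sees a clean shutdown, and a fresh target is started in the same netns. In outline, with helper names as they appear in the xtrace and paths abbreviated:

    rpc="scripts/rpc.py"
    kill -9 "$nvmfpid"          # 1085837 in this run; leaves the lvstore dirty
    wait "$nvmfpid" || true     # the shell reports "Killed", as logged above

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"    # autotest_common.sh helper
    $rpc bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096

The blobstore "Performing recovery" and "Recover: blob 0x0 / 0x1" NOTICEs that follow are the point of the test: loading the dirty lvstore takes the replay-recovery path instead of a normal load.
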
00:08:22.448 [2024-07-24 19:37:39.763627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@865 -- # return 0 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@731 -- # xtrace_disable 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.706 19:37:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.964 [2024-07-24 19:37:40.178177] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:22.964 [2024-07-24 19:37:40.178336] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:22.964 [2024-07-24 19:37:40.178384] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_name=53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local i 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:08:22.964 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.222 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 53eca3df-c7bf-41f3-a9ba-979754d702cb -t 2000 00:08:23.480 [ 00:08:23.480 { 00:08:23.480 "name": "53eca3df-c7bf-41f3-a9ba-979754d702cb", 00:08:23.480 "aliases": [ 00:08:23.480 "lvs/lvol" 00:08:23.480 ], 00:08:23.480 "product_name": "Logical Volume", 00:08:23.480 "block_size": 4096, 00:08:23.480 "num_blocks": 38912, 00:08:23.480 "uuid": "53eca3df-c7bf-41f3-a9ba-979754d702cb", 00:08:23.480 "assigned_rate_limits": { 00:08:23.480 "rw_ios_per_sec": 0, 00:08:23.480 "rw_mbytes_per_sec": 0, 00:08:23.480 "r_mbytes_per_sec": 0, 00:08:23.480 "w_mbytes_per_sec": 0 00:08:23.480 }, 00:08:23.480 "claimed": false, 00:08:23.480 "zoned": false, 
00:08:23.480 "supported_io_types": { 00:08:23.480 "read": true, 00:08:23.480 "write": true, 00:08:23.480 "unmap": true, 00:08:23.480 "flush": false, 00:08:23.480 "reset": true, 00:08:23.480 "nvme_admin": false, 00:08:23.480 "nvme_io": false, 00:08:23.480 "nvme_io_md": false, 00:08:23.480 "write_zeroes": true, 00:08:23.480 "zcopy": false, 00:08:23.480 "get_zone_info": false, 00:08:23.480 "zone_management": false, 00:08:23.480 "zone_append": false, 00:08:23.480 "compare": false, 00:08:23.480 "compare_and_write": false, 00:08:23.480 "abort": false, 00:08:23.480 "seek_hole": true, 00:08:23.480 "seek_data": true, 00:08:23.480 "copy": false, 00:08:23.480 "nvme_iov_md": false 00:08:23.480 }, 00:08:23.480 "driver_specific": { 00:08:23.480 "lvol": { 00:08:23.480 "lvol_store_uuid": "1e4eb353-b333-4adb-b162-0383de812dc1", 00:08:23.480 "base_bdev": "aio_bdev", 00:08:23.480 "thin_provision": false, 00:08:23.480 "num_allocated_clusters": 38, 00:08:23.480 "snapshot": false, 00:08:23.480 "clone": false, 00:08:23.480 "esnap_clone": false 00:08:23.480 } 00:08:23.480 } 00:08:23.480 } 00:08:23.480 ] 00:08:23.480 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # return 0 00:08:23.480 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:23.480 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.738 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.738 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:23.738 19:37:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.996 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.996 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.254 [2024-07-24 19:37:41.430971] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # local es=0 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # case "$(type -t 
"$arg")" in 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:24.254 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:24.512 request: 00:08:24.512 { 00:08:24.512 "uuid": "1e4eb353-b333-4adb-b162-0383de812dc1", 00:08:24.512 "method": "bdev_lvol_get_lvstores", 00:08:24.512 "req_id": 1 00:08:24.512 } 00:08:24.512 Got JSON-RPC error response 00:08:24.512 response: 00:08:24.512 { 00:08:24.512 "code": -19, 00:08:24.512 "message": "No such device" 00:08:24.512 } 00:08:24.512 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # es=1 00:08:24.512 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:08:24.512 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:08:24.512 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:08:24.512 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.770 aio_bdev 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_name=53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local i 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:08:24.770 19:37:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.028 19:37:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 53eca3df-c7bf-41f3-a9ba-979754d702cb -t 2000 00:08:25.286 [ 00:08:25.286 { 00:08:25.286 "name": "53eca3df-c7bf-41f3-a9ba-979754d702cb", 00:08:25.286 "aliases": [ 00:08:25.286 "lvs/lvol" 00:08:25.286 ], 00:08:25.286 "product_name": "Logical Volume", 00:08:25.286 "block_size": 4096, 00:08:25.286 "num_blocks": 38912, 00:08:25.286 "uuid": "53eca3df-c7bf-41f3-a9ba-979754d702cb", 00:08:25.286 "assigned_rate_limits": { 00:08:25.286 "rw_ios_per_sec": 0, 00:08:25.286 "rw_mbytes_per_sec": 0, 00:08:25.286 "r_mbytes_per_sec": 0, 00:08:25.286 "w_mbytes_per_sec": 0 00:08:25.286 }, 00:08:25.286 "claimed": false, 00:08:25.286 "zoned": false, 00:08:25.286 "supported_io_types": { 00:08:25.286 "read": true, 00:08:25.286 "write": true, 00:08:25.286 "unmap": true, 00:08:25.286 "flush": false, 00:08:25.286 "reset": true, 00:08:25.286 "nvme_admin": false, 00:08:25.286 "nvme_io": false, 00:08:25.286 "nvme_io_md": false, 00:08:25.286 "write_zeroes": true, 00:08:25.286 "zcopy": false, 00:08:25.286 "get_zone_info": false, 00:08:25.286 "zone_management": false, 00:08:25.286 "zone_append": false, 00:08:25.286 "compare": false, 00:08:25.286 "compare_and_write": false, 00:08:25.286 "abort": false, 00:08:25.286 "seek_hole": true, 00:08:25.286 "seek_data": true, 00:08:25.286 "copy": false, 00:08:25.286 "nvme_iov_md": false 00:08:25.286 }, 00:08:25.286 "driver_specific": { 00:08:25.286 "lvol": { 00:08:25.286 "lvol_store_uuid": "1e4eb353-b333-4adb-b162-0383de812dc1", 00:08:25.286 "base_bdev": "aio_bdev", 00:08:25.286 "thin_provision": false, 00:08:25.286 "num_allocated_clusters": 38, 00:08:25.286 "snapshot": false, 00:08:25.286 "clone": false, 00:08:25.286 "esnap_clone": false 00:08:25.286 } 00:08:25.286 } 00:08:25.286 } 00:08:25.286 ] 00:08:25.286 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # return 0 00:08:25.286 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:25.286 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.544 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.544 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1e4eb353-b333-4adb-b162-0383de812dc1 00:08:25.544 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.801 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.802 19:37:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 53eca3df-c7bf-41f3-a9ba-979754d702cb 00:08:26.059 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1e4eb353-b333-4adb-b162-0383de812dc1 
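
The xtrace above also shows the NOT wrapper from common/autotest_common.sh doing a negative test: once bdev_aio_delete hot-removes the base bdev, bdev_lvol_get_lvstores must fail with -19 "No such device", and NOT inverts that status so the expected failure counts as a pass. A simplified equivalent follows; the real helper, per the (( es > 128 )) check visible in the trace, additionally treats signal exits specially and can match an expected error string.

    rpc="scripts/rpc.py"
    NOT() {
        # succeed only if the wrapped command fails
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT $rpc bdev_lvol_get_lvstores -u "$lvs"   # store is gone until aio_bdev is re-created
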
00:08:26.316 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.574 00:08:26.574 real 0m19.983s 00:08:26.574 user 0m50.452s 00:08:26.574 sys 0m4.658s 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.574 ************************************ 00:08:26.574 END TEST lvs_grow_dirty 00:08:26.574 ************************************ 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # type=--id 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # id=0 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # for n in $shm_files 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:26.574 nvmf_trace.0 00:08:26.574 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # return 0 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:26.575 rmmod nvme_tcp 00:08:26.575 rmmod nvme_fabrics 00:08:26.575 rmmod nvme_keyring 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # '[' -n 1089944 ']' 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # killprocess 1089944 00:08:26.575 
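
process_shm, invoked next in the trace, is what produces the nvmf_trace.0 archive seen above: it globs /dev/shm for files matching the shm id and tars each into the output directory. A minimal sketch of the same collection step, with the output path taken from the trace:

    id=0
    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
    shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')
    for n in $shm_files; do
        tar -C /dev/shm/ -cvzf "$out/${n}_shm.tar.gz" "$n"    # e.g. nvmf_trace.0_shm.tar.gz
    done
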
19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' -z 1089944 ']' 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # kill -0 1089944 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # uname 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:26.575 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1089944 00:08:26.833 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:26.833 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:26.833 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1089944' 00:08:26.833 killing process with pid 1089944 00:08:26.833 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # kill 1089944 00:08:26.833 19:37:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@975 -- # wait 1089944 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:27.108 19:37:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:08:29.015 00:08:29.015 real 0m43.131s 00:08:29.015 user 1m13.445s 00:08:29.015 sys 0m8.514s 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:29.015 ************************************ 00:08:29.015 END TEST nvmf_lvs_grow 00:08:29.015 ************************************ 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.015 ************************************ 00:08:29.015 START TEST nvmf_bdev_io_wait 00:08:29.015 ************************************ 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:29.015 * Looking for test storage... 00:08:29.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.015 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:08:29.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # xtrace_disable 00:08:29.274 19:37:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # pci_devs=() 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -a pci_devs 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # pci_net_devs=() 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # pci_drivers=() 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -A pci_drivers 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # net_devs=() 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # local -ga net_devs 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # e810=() 00:08:31.173 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # local -ga e810 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # x722=() 
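
The "[: : integer expression expected" complaint logged above is a real, if harmless, glitch at common.sh line 33: an empty expansion reaches a numeric test, so '[' '' -eq 1 ']' cannot be evaluated. The usual repair is to default the expansion before comparing; a sketch with a hypothetical variable name, since the trace does not show which variable is unset:

    # Fails when the variable is unset or empty:
    #   [ "$SPDK_TEST_FOO" -eq 1 ]    ->    [: : integer expression expected
    # Defaulting the expansion keeps the test well-formed:
    if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
        echo 'feature enabled'
    fi
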
00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # local -ga x722 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # mlx=() 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # local -ga mlx 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:31.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:31.174 19:37:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:31.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:31.174 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:31.174 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # (( 2 == 0 )) 
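
The discovery loop above resolves each candidate e810 PCI function to its kernel net device through sysfs and keeps only interfaces that are up. A compact sketch of the resolution step, using one of the addresses found in the trace:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep device names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
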
00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # is_hw=yes 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:08:31.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:31.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:08:31.174 00:08:31.174 --- 10.0.0.2 ping statistics --- 00:08:31.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.174 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:31.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:08:31.174 00:08:31.174 --- 10.0.0.1 ping statistics --- 00:08:31.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.174 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # return 0 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@725 -- # xtrace_disable 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # nvmfpid=1092463 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # waitforlisten 1092463 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # '[' -z 1092463 ']' 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:31.175 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.175 [2024-07-24 19:37:48.543148] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
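
nvmf_tcp_init, whose commands appear above, isolates the target port in its own network namespace, leaves the second port in the root namespace as the initiator, opens TCP/4420, and verifies both directions with ping. The bring-up, condensed, with the interface names and addresses from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
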
00:08:31.175 [2024-07-24 19:37:48.543228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.432 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.432 [2024-07-24 19:37:48.611671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.432 [2024-07-24 19:37:48.730262] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.433 [2024-07-24 19:37:48.730330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.433 [2024-07-24 19:37:48.730344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.433 [2024-07-24 19:37:48.730355] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.433 [2024-07-24 19:37:48.730369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.433 [2024-07-24 19:37:48.730792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.433 [2024-07-24 19:37:48.730859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.433 [2024-07-24 19:37:48.730905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.433 [2024-07-24 19:37:48.730908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.433 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:31.433 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@865 -- # return 0 00:08:31.433 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:31.433 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@731 -- # xtrace_disable 00:08:31.433 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.691 19:37:48 
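
nvmfappstart, shown starting above, launches nvmf_tgt inside the target namespace with --wait-for-rpc and blocks in waitforlisten until the RPC socket answers; only then are bdev_set_options and framework_start_init issued. A simplified sketch of that sequence (the real waitforlisten has retry limits this sketch omits):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" \
        -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5    # wait for the app to listen on the UNIX domain socket
    done
    "$spdk/scripts/rpc.py" bdev_set_options -p 5 -c 1
    "$spdk/scripts/rpc.py" framework_start_init        # releases the --wait-for-rpc pause
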
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 [2024-07-24 19:37:48.896739] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 Malloc0 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.691 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:31.692 [2024-07-24 19:37:48.956533] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1092611 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1092613 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:08:31.692 { 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme$subsystem", 00:08:31.692 "trtype": "$TEST_TRANSPORT", 00:08:31.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "$NVMF_PORT", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.692 "hdgst": ${hdgst:-false}, 00:08:31.692 "ddgst": ${ddgst:-false} 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 } 00:08:31.692 EOF 00:08:31.692 )") 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1092615 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:08:31.692 { 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme$subsystem", 00:08:31.692 "trtype": "$TEST_TRANSPORT", 00:08:31.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "$NVMF_PORT", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.692 "hdgst": ${hdgst:-false}, 00:08:31.692 "ddgst": ${ddgst:-false} 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 } 00:08:31.692 EOF 00:08:31.692 )") 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1092618 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:08:31.692 { 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme$subsystem", 00:08:31.692 "trtype": "$TEST_TRANSPORT", 00:08:31.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "$NVMF_PORT", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.692 "hdgst": ${hdgst:-false}, 00:08:31.692 "ddgst": 
${ddgst:-false} 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 } 00:08:31.692 EOF 00:08:31.692 )") 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:08:31.692 { 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme$subsystem", 00:08:31.692 "trtype": "$TEST_TRANSPORT", 00:08:31.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "$NVMF_PORT", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:31.692 "hdgst": ${hdgst:-false}, 00:08:31.692 "ddgst": ${ddgst:-false} 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 } 00:08:31.692 EOF 00:08:31.692 )") 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1092611 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme1", 00:08:31.692 "trtype": "tcp", 00:08:31.692 "traddr": "10.0.0.2", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "4420", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.692 "hdgst": false, 00:08:31.692 "ddgst": false 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 }' 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 
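
Each bdevperf instance receives its bdev configuration over /dev/fd/63 via process substitution; gen_nvmf_target_json, quoted above, emits one bdev_nvme_attach_controller stanza per subsystem. Only the stanza itself is visible in the trace, so the enclosing "subsystems"/"bdev" envelope below is an assumption based on SPDK's standard JSON-config shape:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }
    EOF
    )
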
00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme1", 00:08:31.692 "trtype": "tcp", 00:08:31.692 "traddr": "10.0.0.2", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "4420", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.692 "hdgst": false, 00:08:31.692 "ddgst": false 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 }' 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme1", 00:08:31.692 "trtype": "tcp", 00:08:31.692 "traddr": "10.0.0.2", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "4420", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.692 "hdgst": false, 00:08:31.692 "ddgst": false 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 }' 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:08:31.692 19:37:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:08:31.692 "params": { 00:08:31.692 "name": "Nvme1", 00:08:31.692 "trtype": "tcp", 00:08:31.692 "traddr": "10.0.0.2", 00:08:31.692 "adrfam": "ipv4", 00:08:31.692 "trsvcid": "4420", 00:08:31.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:31.692 "hdgst": false, 00:08:31.692 "ddgst": false 00:08:31.692 }, 00:08:31.692 "method": "bdev_nvme_attach_controller" 00:08:31.692 }' 00:08:31.692 [2024-07-24 19:37:49.005395] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:08:31.692 [2024-07-24 19:37:49.005395] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:08:31.692 [2024-07-24 19:37:49.005421] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:08:31.693 [2024-07-24 19:37:49.005486] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:31.693 [2024-07-24 19:37:49.005486] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:31.693 [2024-07-24 19:37:49.005487] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:31.693 [2024-07-24 19:37:49.006015] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:08:31.693 [2024-07-24 19:37:49.006081] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:31.693 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.950 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.950 [2024-07-24 19:37:49.174669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.950 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.950 [2024-07-24 19:37:49.272008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:31.950 [2024-07-24 19:37:49.275668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.209 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.209 [2024-07-24 19:37:49.371398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:32.209 [2024-07-24 19:37:49.373259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.209 [2024-07-24 19:37:49.440891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.209 [2024-07-24 19:37:49.470921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:32.209 [2024-07-24 19:37:49.535942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:32.467 Running I/O for 1 seconds... 00:08:32.467 Running I/O for 1 seconds... 00:08:32.467 Running I/O for 1 seconds... 00:08:32.467 Running I/O for 1 seconds... 00:08:33.401 00:08:33.401 Latency(us) 00:08:33.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.401 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:33.401 Nvme1n1 : 1.01 9593.05 37.47 0.00 0.00 13274.64 9126.49 19612.25 00:08:33.401 =================================================================================================================== 00:08:33.401 Total : 9593.05 37.47 0.00 0.00 13274.64 9126.49 19612.25 00:08:33.401 00:08:33.401 Latency(us) 00:08:33.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.401 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:33.401 Nvme1n1 : 1.00 192744.59 752.91 0.00 0.00 661.45 279.13 958.77 00:08:33.401 =================================================================================================================== 00:08:33.401 Total : 192744.59 752.91 0.00 0.00 661.45 279.13 958.77 00:08:33.401 00:08:33.401 Latency(us) 00:08:33.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.401 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:33.401 Nvme1n1 : 1.01 8294.41 32.40 0.00 0.00 15359.59 7087.60 26408.58 00:08:33.401 =================================================================================================================== 00:08:33.401 Total : 8294.41 32.40 0.00 0.00 15359.59 7087.60 26408.58 00:08:33.658 00:08:33.658 Latency(us) 00:08:33.658 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:33.658 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:33.658 Nvme1n1 : 1.01 9536.73 37.25 0.00 0.00 13372.11 5971.06 24078.41 00:08:33.658 =================================================================================================================== 00:08:33.658 Total : 9536.73 37.25 0.00 0.00 13372.11 5971.06 24078.41 00:08:33.915 19:37:51 
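
The four result tables above come from four bdevperf processes run in parallel, one workload per core-mask bit, each reaped by PID after the one-second runs complete. The orchestration, condensed from the commands in the trace (gen_nvmf_target_json is the helper quoted earlier):

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    opts='-q 128 -o 4096 -t 1 -s 256'
    "$bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) $opts -w write & WRITE_PID=$!
    "$bdevperf" -m 0x20 -i 2 --json <(gen_nvmf_target_json) $opts -w read  & READ_PID=$!
    "$bdevperf" -m 0x40 -i 3 --json <(gen_nvmf_target_json) $opts -w flush & FLUSH_PID=$!
    "$bdevperf" -m 0x80 -i 4 --json <(gen_nvmf_target_json) $opts -w unmap & UNMAP_PID=$!
    wait $WRITE_PID $READ_PID $FLUSH_PID $UNMAP_PID
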
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1092613 00:08:33.915 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1092615 00:08:33.915 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1092618 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:33.916 rmmod nvme_tcp 00:08:33.916 rmmod nvme_fabrics 00:08:33.916 rmmod nvme_keyring 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # '[' -n 1092463 ']' 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # killprocess 1092463 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' -z 1092463 ']' 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # kill -0 1092463 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # uname 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1092463 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1092463' 00:08:33.916 killing process with pid 1092463 00:08:33.916 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # kill 1092463 00:08:33.916 19:37:51 
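
killprocess, running above, guards the teardown: the PID must still be alive, and its comm must not be sudo (signalling the wrapper would orphan the real target). A simplified sketch of the checks visible in the trace; the sudo-wrapped branch of the real helper is omitted here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || return 1          # process still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0
        [ "$name" = sudo ] && return 1                   # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }
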
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@975 -- # wait 1092463 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.173 19:37:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:08:36.699 00:08:36.699 real 0m7.249s 00:08:36.699 user 0m16.728s 00:08:36.699 sys 0m3.708s 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.699 ************************************ 00:08:36.699 END TEST nvmf_bdev_io_wait 00:08:36.699 ************************************ 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.699 ************************************ 00:08:36.699 START TEST nvmf_queue_depth 00:08:36.699 ************************************ 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:36.699 * Looking for test storage... 
00:08:36.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[toolchain prefixes repeated from earlier exports]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # [same PATH value re-prepended with /opt/go/1.21.1/bin] 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # [same PATH value re-prepended with /opt/protoc/21.7/bin] 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo [same PATH value] 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.699
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.699 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # xtrace_disable 00:08:36.700 19:37:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.598 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.598 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # pci_devs=() 00:08:38.598 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -a pci_devs 00:08:38.598 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # pci_net_devs=() 00:08:38.598 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:08:38.598 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # pci_drivers=() 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -A pci_drivers 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # net_devs=() 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # local -ga net_devs 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # e810=() 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # local -ga e810 00:08:38.599 
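The "[: : integer expression expected" complaint at the top of this block is a genuine shell bug in nvmf/common.sh line 33: build_nvmf_app_args evaluates '[' '' -eq 1 ']', so test's numeric -eq receives an empty operand. A minimal sketch of a defensive rewrite, assuming the empty value is an optional 0/1 test flag (SOME_TEST_FLAG and the appended argument below are hypothetical placeholders, not the real names in common.sh):

    # Default the possibly-unset flag to 0 so [ ... -eq 1 ] always compares integers.
    if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
        NVMF_APP+=(--some-extra-arg)   # hypothetical: whatever the real branch appends
    fi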
19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # x722=() 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # local -ga x722 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # mlx=() 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # local -ga mlx 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:38.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # for pci in 
"${pci_devs[@]}" 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:38.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:38.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:38.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # (( 2 == 0 )) 
00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # is_hw=yes 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:38.599 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:08:38.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:38.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:38.599 00:08:38.599 --- 10.0.0.2 ping statistics --- 00:08:38.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.599 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:38.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:38.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:08:38.600 00:08:38.600 --- 10.0.0.1 ping statistics --- 00:08:38.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.600 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # return 0 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@725 -- # xtrace_disable 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@485 -- # nvmfpid=1094839 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@486 -- # waitforlisten 1094839 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@832 -- # '[' -z 1094839 ']' 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:38.600 19:37:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.600 [2024-07-24 19:37:55.778719] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
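The nvmf_tcp_init sequence traced above splits the two E810 ports so initiator and target traffic actually crosses the physical link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2) while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), and the pings verify reachability in both directions before nvmf_tgt is launched inside the namespace. Condensed from the commands above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target check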
00:08:38.600 [2024-07-24 19:37:55.778796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:38.600 EAL: No free 2048 kB hugepages reported on node 1 00:08:38.600 [2024-07-24 19:37:55.842314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.600 [2024-07-24 19:37:55.952556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:38.600 [2024-07-24 19:37:55.952628] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:38.600 [2024-07-24 19:37:55.952641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:38.600 [2024-07-24 19:37:55.952667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:38.600 [2024-07-24 19:37:55.952677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:38.600 [2024-07-24 19:37:55.952703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@865 -- # return 0 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@731 -- # xtrace_disable 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 [2024-07-24 19:37:56.101872] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 Malloc0 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.858 [2024-07-24 19:37:56.162562] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1094864 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:38.858 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1094864 /var/tmp/bdevperf.sock 00:08:38.859 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@832 -- # '[' -z 1094864 ']' 00:08:38.859 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:38.859 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:38.859 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:38.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:38.859 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:38.859 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.859 [2024-07-24 19:37:56.208627] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
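The rpc_cmd calls above provision the freshly started target over /var/tmp/spdk.sock: create the TCP transport, back it with a 64 MiB / 512 B-block Malloc bdev, and expose that bdev as a namespace of cnode1 listening on 10.0.0.2:4420; bdevperf is then launched with -z so it idles on its own RPC socket until a controller is attached. Spelled out as plain rpc.py invocations (the rpc variable below just abbreviates the full scripts/rpc.py path):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # bdevperf side: attach the remote controller over its dedicated socket
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1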
00:08:38.859 [2024-07-24 19:37:56.208702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094864 ] 00:08:39.117 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.117 [2024-07-24 19:37:56.272602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.117 [2024-07-24 19:37:56.389007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@865 -- # return 0 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.375 NVMe0n1 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:39.375 19:37:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:39.633 Running I/O for 10 seconds... 00:08:49.599 00:08:49.599 Latency(us) 00:08:49.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.599 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:49.599 Verification LBA range: start 0x0 length 0x4000 00:08:49.599 NVMe0n1 : 10.09 8304.21 32.44 0.00 0.00 122769.75 25631.86 75730.49 00:08:49.599 =================================================================================================================== 00:08:49.599 Total : 8304.21 32.44 0.00 0.00 122769.75 25631.86 75730.49 00:08:49.599 0 00:08:49.857 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1094864 00:08:49.857 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' -z 1094864 ']' 00:08:49.857 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # kill -0 1094864 00:08:49.857 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # uname 00:08:49.857 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:49.857 19:38:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1094864 00:08:49.857 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:08:49.857 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:08:49.857 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1094864' 00:08:49.857 killing process with pid 1094864 00:08:49.857 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # kill 1094864 00:08:49.857 Received shutdown 
signal, test time was about 10.000000 seconds 00:08:49.857 00:08:49.857 Latency(us) 00:08:49.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.857 =================================================================================================================== 00:08:49.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:49.857 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@975 -- # wait 1094864 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:50.115 rmmod nvme_tcp 00:08:50.115 rmmod nvme_fabrics 00:08:50.115 rmmod nvme_keyring 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # '[' -n 1094839 ']' 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # killprocess 1094839 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' -z 1094839 ']' 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # kill -0 1094839 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # uname 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1094839 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1094839' 00:08:50.115 killing process with pid 1094839 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # kill 1094839 00:08:50.115 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@975 -- # wait 1094839 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.373 19:38:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:08:52.937 00:08:52.937 real 0m16.096s 00:08:52.937 user 0m22.955s 00:08:52.937 sys 0m2.879s 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:52.937 ************************************ 00:08:52.937 END TEST nvmf_queue_depth 00:08:52.937 ************************************ 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.937 ************************************ 00:08:52.937 START TEST nvmf_target_multipath 00:08:52.937 ************************************ 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:52.937 * Looking for test storage... 
00:08:52.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[toolchain prefixes repeated from earlier exports]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.937 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # [same PATH value re-prepended with /opt/go/1.21.1/bin] 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # [same PATH value re-prepended with /opt/protoc/21.7/bin] 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo [same PATH value] 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath --
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:52.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # xtrace_disable 00:08:52.938 19:38:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # pci_devs=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -a pci_devs 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # pci_net_devs=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # pci_drivers=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -A pci_drivers 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # 
net_devs=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # local -ga net_devs 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # e810=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # local -ga e810 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # x722=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # local -ga x722 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # mlx=() 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # local -ga mlx 00:08:54.846 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:54.847 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # 
[[ ice == unbound ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:54.847 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:54.847 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:54.847 19:38:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:54.847 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # is_hw=yes 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:08:54.847 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:54.848 19:38:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:54.848 19:38:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:08:54.848 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:54.848 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:08:54.848 00:08:54.848 --- 10.0.0.2 ping statistics --- 00:08:54.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.848 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:54.848 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:54.848 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:08:54.848 00:08:54.848 --- 10.0.0.1 ping statistics --- 00:08:54.848 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:54.848 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # return 0 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:54.848 only one NIC for nvmf test 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:54.848 rmmod nvme_tcp 00:08:54.848 rmmod nvme_fabrics 00:08:54.848 rmmod nvme_keyring 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:54.848 19:38:12 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:54.848 19:38:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.750 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:08:56.750 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:56.750 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:56.750 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:08:56.751 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.009 19:38:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:08:57.009 00:08:57.009 real 0m4.375s 00:08:57.009 user 0m0.809s 00:08:57.009 sys 0m1.552s 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # xtrace_disable 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:57.009 ************************************ 00:08:57.009 END TEST nvmf_target_multipath 00:08:57.009 ************************************ 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.009 ************************************ 00:08:57.009 START TEST nvmf_zcopy 00:08:57.009 ************************************ 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:57.009 * Looking for test storage... 00:08:57.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@452 -- # prepare_net_devs 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # local -g is_hw=no 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # remove_spdk_ns 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # xtrace_disable 00:08:57.009 19:38:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.906 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.906 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # pci_devs=() 00:08:58.906 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -a pci_devs 00:08:58.906 
19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # pci_net_devs=() 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # pci_drivers=() 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -A pci_drivers 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # net_devs=() 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # local -ga net_devs 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # e810=() 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # local -ga e810 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # x722=() 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # local -ga x722 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # mlx=() 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # local -ga mlx 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:58.907 Found 0000:0a:00.0 (0x8086 - 0x159b) 
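The trace above is nvmf/common.sh resolving each allowlisted PCI function to its kernel net device by globbing sysfs. A minimal standalone sketch of the same walk, reusing the 0000:0a:00.0 E810 port from this log (the explicit error guard is an addition; the script's own guard is the (( 1 == 0 )) count check traced above):

# Resolve a PCI function to its net device via sysfs, as common.sh does.
pci=0000:0a:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
[[ -e ${pci_net_devs[0]} ]] || { echo "no net device bound to $pci" >&2; exit 1; }
pci_net_devs=("${pci_net_devs[@]##*/}")    # strip the sysfs path, keep the ifname
echo "Found net devices under $pci: ${pci_net_devs[*]}"    # e.g. cvl_0_0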
00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:58.907 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:58.907 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # [[ up == up ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:58.907 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # is_hw=yes 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:08:58.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:58.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:08:58.907 00:08:58.907 --- 10.0.0.2 ping statistics --- 00:08:58.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.907 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:58.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:58.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:08:58.907 00:08:58.907 --- 10.0.0.1 ping statistics --- 00:08:58.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:58.907 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # return 0 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:08:58.907 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@725 -- # xtrace_disable 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@485 -- # nvmfpid=1100042 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@486 -- # waitforlisten 1100042 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@832 -- # '[' -z 1100042 ']' 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local max_retries=100 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
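nvmf_tcp_init, traced above, builds the point-to-point topology the rest of the run depends on: the first E810 port (cvl_0_0) becomes the target interface inside the cvl_0_0_ns_spdk namespace at 10.0.0.2/24, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, the NVMe/TCP listener port is opened, and one ping in each direction proves the path before the target starts. Condensed from the commands in this log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator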
00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@841 -- # xtrace_disable 00:08:58.908 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.166 [2024-07-24 19:38:16.293172] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:08:59.166 [2024-07-24 19:38:16.293270] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.166 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.166 [2024-07-24 19:38:16.358355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.166 [2024-07-24 19:38:16.475588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.166 [2024-07-24 19:38:16.475645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.166 [2024-07-24 19:38:16.475662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.166 [2024-07-24 19:38:16.475676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.166 [2024-07-24 19:38:16.475688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.166 [2024-07-24 19:38:16.475722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@865 -- # return 0 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@731 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 [2024-07-24 19:38:16.627120] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 [2024-07-24 19:38:16.643377] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 malloc0 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 00:08:59.424 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:08:59.425 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:08:59.425 { 00:08:59.425 "params": { 00:08:59.425 "name": "Nvme$subsystem", 00:08:59.425 "trtype": "$TEST_TRANSPORT", 00:08:59.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.425 "adrfam": "ipv4", 00:08:59.425 "trsvcid": "$NVMF_PORT", 00:08:59.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.425 "hdgst": ${hdgst:-false}, 00:08:59.425 "ddgst": ${ddgst:-false} 00:08:59.425 }, 00:08:59.425 "method": "bdev_nvme_attach_controller" 00:08:59.425 } 00:08:59.425 EOF 00:08:59.425 )") 00:08:59.425 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:08:59.425 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 
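With the target app listening on /var/tmp/spdk.sock inside the namespace, zcopy.sh issues the configuration RPCs traced above through its rpc_cmd helper. The same sequence expressed as plain rpc.py calls (driving rpc.py against the default socket is an assumption here; the arguments are verbatim from the log):

scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy    # TCP transport with zero-copy enabled
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0           # 32 MiB ramdisk, 4096 B blocks
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1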
00:08:59.425 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=,
00:08:59.425 19:38:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{
00:08:59.425 "params": {
00:08:59.425 "name": "Nvme1",
00:08:59.425 "trtype": "tcp",
00:08:59.425 "traddr": "10.0.0.2",
00:08:59.425 "adrfam": "ipv4",
00:08:59.425 "trsvcid": "4420",
00:08:59.425 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:59.425 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:59.425 "hdgst": false,
00:08:59.425 "ddgst": false
00:08:59.425 },
00:08:59.425 "method": "bdev_nvme_attach_controller"
00:08:59.425 }'
00:08:59.425 [2024-07-24 19:38:16.733336] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:08:59.425 [2024-07-24 19:38:16.733412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100073 ]
00:08:59.425 EAL: No free 2048 kB hugepages reported on node 1
00:08:59.425 [2024-07-24 19:38:16.791897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.683 [2024-07-24 19:38:16.910981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.941 Running I/O for 10 seconds...
00:09:09.907
00:09:09.907 Latency(us)
00:09:09.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:09.907 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:09.907 Verification LBA range: start 0x0 length 0x1000
00:09:09.907 Nvme1n1 : 10.02 5627.47 43.96 0.00 0.00 22683.26 3689.43 31651.46
00:09:09.907 ===================================================================================================================
00:09:09.907 Total : 5627.47 43.96 0.00 0.00 22683.26 3689.43 31651.46
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1101275
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@536 -- # config=()
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}"
00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF
00:09:10.165 {
00:09:10.165 "params": {
00:09:10.165 "name": "Nvme$subsystem",
00:09:10.165 "trtype": "$TEST_TRANSPORT",
00:09:10.165 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:10.165 "adrfam": "ipv4",
00:09:10.165 "trsvcid": "$NVMF_PORT",
00:09:10.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:10.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:10.165 "hdgst": ${hdgst:-false},
00:09:10.165 "ddgst": ${ddgst:-false}
00:09:10.165 },
00:09:10.165 "method": "bdev_nvme_attach_controller"
00:09:10.165 }
00:09:10.165 EOF
00:09:10.165 )")
00:09:10.165 19:38:27
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:09:10.165 [2024-07-24 19:38:27.468064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.468110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:09:10.165 19:38:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:09:10.165 "params": { 00:09:10.165 "name": "Nvme1", 00:09:10.165 "trtype": "tcp", 00:09:10.165 "traddr": "10.0.0.2", 00:09:10.165 "adrfam": "ipv4", 00:09:10.165 "trsvcid": "4420", 00:09:10.165 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:10.165 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:10.165 "hdgst": false, 00:09:10.165 "ddgst": false 00:09:10.165 }, 00:09:10.165 "method": "bdev_nvme_attach_controller" 00:09:10.165 }' 00:09:10.165 [2024-07-24 19:38:27.476037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.476064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.484048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.484072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.492065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.492087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.500091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.500112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.506549] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
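Both bdevperf invocations receive their initiator configuration the same way: gen_nvmf_target_json prints the bdev_nvme_attach_controller stanza echoed above, and the test feeds it in through process substitution as --json /dev/fd/6x, so no config file is ever written to disk. A hedged reconstruction of this second run; only the inner params/method object appears verbatim in the log, while the outer subsystems/config wrapper is the standard SPDK JSON-config shape and is assumed here:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -t 5 -q 128 -w randrw -M 50 -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)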
00:09:10.165 [2024-07-24 19:38:27.506623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101275 ] 00:09:10.165 [2024-07-24 19:38:27.508112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.508132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.516134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.516154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.524156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.524176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 [2024-07-24 19:38:27.532178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.532197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.165 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.165 [2024-07-24 19:38:27.540203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.165 [2024-07-24 19:38:27.540239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.548238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.548298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.556269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.556326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.564308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.564330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.570895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.424 [2024-07-24 19:38:27.572329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.572350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.580380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.580415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.588380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.588406] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.596365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 19:38:27.596386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:10.424 [2024-07-24 19:38:27.604387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:10.424 [2024-07-24 
19:38:27.604408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.424 [2024-07-24 19:38:27.612408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.424 [2024-07-24 19:38:27.612428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-record failure (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats at roughly 8 ms intervals from 19:38:27.620429 through 19:38:27.917331 (elapsed stamps 00:09:10.424 through 00:09:10.683); the only other record in this span is a notice at 19:38:27.690126: reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 ...]
00:09:10.683 Running I/O for 5 seconds...
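The pattern here is a test that keeps issuing the nvmf_subsystem_add_ns RPC for an NSID the subsystem already owns: spdk_nvmf_subsystem_add_ns_ext() rejects each attempt ("Requested NSID 1 already in use") and the RPC handler then reports "Unable to add namespace", while an I/O workload runs for 5 seconds in parallel. A minimal sketch of a loop that produces this exact error pair against a running target is shown below; the subsystem NQN and bdev name are hypothetical placeholders, since the log does not show the script driving this run:

  #!/usr/bin/env bash
  # Sketch only: reproduces the "NSID already in use" error pair seen above.
  # Assumes an SPDK nvmf target on the default RPC socket; the NQN and bdev
  # name are placeholders, not taken from this log.
  rpc=./scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # NSID 1 is already allocated to this subsystem, so every attempt fails in
  # spdk_nvmf_subsystem_add_ns_ext() and the RPC layer logs
  # "Unable to add namespace".
  for _ in $(seq 1 10); do
      "$rpc" nvmf_subsystem_add_ns -n 1 "$nqn" Malloc0 || true
  done

  # Listing the subsystems shows which NSIDs are currently allocated.
  "$rpc" nvmf_get_subsystems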
00:09:10.683 [2024-07-24 19:38:27.925337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.683 [2024-07-24 19:38:27.925359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical failure pair keeps repeating at roughly 10 to 15 ms intervals while the I/O load runs, from 19:38:27.940794 through 19:38:30.922132 (elapsed stamps 00:09:10.683 through 00:09:13.788); no other record types appear in this span ...]
00:09:13.788 [2024-07-24 19:38:30.931396] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.931423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:30.941826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.941852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:30.952235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.952272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:30.962817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.962845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:30.973159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.973187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:30.983495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.983523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:30.994114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:30.994141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.004776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.004803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.015882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.015910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.026615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.026643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.037313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.037340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.048054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.048082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.060632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.060659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.070529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.070564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.081841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.081868] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.094056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.094083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.103899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.103926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.114397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.114423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.124976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.125003] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.137206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.137232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.147231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.147266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:13.788 [2024-07-24 19:38:31.157457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:13.788 [2024-07-24 19:38:31.157483] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.167906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.167932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.180455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.180482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.190189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.190216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.200432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.200459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.212424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.212454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.223948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.223978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.235487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.235529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.046 [2024-07-24 19:38:31.246794] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.046 [2024-07-24 19:38:31.246823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.258215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.258254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.269550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.269579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.281040] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.281070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.292850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.292880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.304442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.304469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.315526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.315553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.327254] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.327298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.338715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.338745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.349933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.349963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.361581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.361612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.374430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.374457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.384784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.384814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.396860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.396890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.408875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.408905] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.047 [2024-07-24 19:38:31.420848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.047 [2024-07-24 19:38:31.420877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.432931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.432961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.446213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.446251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.457122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.457153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.468921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.468951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.480222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.480275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.491790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.491820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.503263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.503312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.514835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.514865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.526628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.526658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.537758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.537789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.549342] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.549370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.560614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.560644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.574462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.574489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.585540] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.585584] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.597090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.597121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.608268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.608310] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.619750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.619779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.630843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.630874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.642154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.642184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.653463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.653490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.664668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.664698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.305 [2024-07-24 19:38:31.677704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.305 [2024-07-24 19:38:31.677734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.688125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.688155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.699471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.699498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.710844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.710874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.722490] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.722517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.733933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.733963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.747202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.747236] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.758044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.758074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.769296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.769323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.780229] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.780268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.791688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.791718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.803355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.803396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.814531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.814572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.825737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.825767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.837649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.837678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.849043] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.849073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.860582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.860612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.872071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.872101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.883767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.883797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.896929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.896959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.907649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.907679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.919380] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.919408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.563 [2024-07-24 19:38:31.930650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.563 [2024-07-24 19:38:31.930695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:31.942641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:31.942671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:31.954400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:31.954428] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:31.967996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:31.968027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:31.979051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:31.979080] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:31.990049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:31.990079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.001378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.001405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.013015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.013045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.024643] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.024674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.035868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.035899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.047908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.047939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.059815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.059846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.821 [2024-07-24 19:38:32.070962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.821 [2024-07-24 19:38:32.070992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.082356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.082384] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.093798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.093828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.105311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.105339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.116948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.116978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.128131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.128161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.139856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.139895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.151551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.151604] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.162904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.162934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.174426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.174453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.185753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.185783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.822 [2024-07-24 19:38:32.196914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.822 [2024-07-24 19:38:32.196944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.081 [2024-07-24 19:38:32.208459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.208487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.219014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.219041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.229737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.229763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.242474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.242501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.254229] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.254264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.263047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.263073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.274407] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.274434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.286608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.286634] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.296433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.296459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.306952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.306979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.317234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.317270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.327798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.327825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.340228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.340263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.349993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.350020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.360739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.360771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.371385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.371412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.381881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.381907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.392567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.392594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.405477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.405504] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.415416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.415442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.426092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.426119] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.443632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.443660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.082 [2024-07-24 19:38:32.455318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.082 [2024-07-24 19:38:32.455345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.465011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.465038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.476088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.476115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.488117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.488144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.498054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.498081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.508086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.508113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.518836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.518863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.530784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.530815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.542405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.542432] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.554172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.554202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.567640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.567683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.578570] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.578611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.590467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.590495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.602304] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.602331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.616093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.616123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.626820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.626850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.638639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.638669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.650112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.650141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.663468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.663495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.674603] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.674633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.686212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.686250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.698082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.698111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.341 [2024-07-24 19:38:32.711597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.341 [2024-07-24 19:38:32.711628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.722760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.722790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.734372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.734399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.745565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.745595] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.757156] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.757186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.768956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.768987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.780570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.780600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.792274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.792316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.803581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.803636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.814915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.814945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.826194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.826224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.837430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.837457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.849123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.849153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.860538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.860580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.872443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.872470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.884427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.884454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.896296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.896323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.907944] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.599 [2024-07-24 19:38:32.907973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.599 [2024-07-24 19:38:32.921778] 
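For context: the pair collapsed above is the expected failure path of this zcopy test, which keeps re-issuing the nvmf_subsystem_add_ns RPC for a namespace ID that is already attached while I/O is running. A minimal sketch of reproducing the same error by hand, assuming a freshly started SPDK target and using illustrative setup steps (only the last call corresponds to the failing step in this trace):

# hypothetical setup; the names malloc0/cnode1 mirror the ones used in this run
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # claims NSID 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # fails: Requested NSID 1 already in use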
00:09:15.599 Latency(us)
00:09:15.599 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:09:15.599 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:15.599 Nvme1n1                     :       5.01   11287.78      88.19      0.00      0.00   11324.37    4636.07   26408.58
00:09:15.600 ===================================================================================================================
00:09:15.600 Total                       :              11287.78      88.19      0.00      0.00   11324.37    4636.07   26408.58
00:09:15.600 [... the add-namespace error pair continues during test shutdown, 2024-07-24 19:38:32.949282 through 19:38:33.222056 (elapsed 00:09:15.600 - 00:09:15.858) ...]
00:09:15.858 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1101275) - No such process
00:09:15.858 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1101275
00:09:15.858 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:15.858 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable
00:09:15.858 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
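The bdev_delay_create step above wraps malloc0 in a delay bdev before the abort workload runs: with roughly one second of injected latency on every I/O path, queued requests stay outstanding long enough for abort commands to catch them. A sketch of the same call issued directly via scripts/rpc.py, values verbatim from the trace (the -r/-t/-w/-n latencies are average/p99 read and average/p99 write, in microseconds):

scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000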
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.157 delay0 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:16.157 19:38:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:16.157 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.157 [2024-07-24 19:38:33.386438] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:24.256 Initializing NVMe Controllers 00:09:24.256 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:24.256 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:24.256 Initialization complete. Launching workers. 00:09:24.256 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 236, failed: 21817 00:09:24.256 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 21931, failed to submit 122 00:09:24.256 success 21845, unsuccess 86, failed 0 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:24.256 rmmod nvme_tcp 00:09:24.256 rmmod nvme_fabrics 00:09:24.256 rmmod nvme_keyring 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # '[' -n 1100042 ']' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # killprocess 1100042 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' -z 1100042 ']' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # kill -0 1100042 00:09:24.256 19:38:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # uname 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1100042 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1100042' 00:09:24.256 killing process with pid 1100042 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # kill 1100042 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@975 -- # wait 1100042 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.256 19:38:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:09:25.633 00:09:25.633 real 0m28.722s 00:09:25.633 user 0m42.060s 00:09:25.633 sys 0m9.232s 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:25.633 ************************************ 00:09:25.633 END TEST nvmf_zcopy 00:09:25.633 ************************************ 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.633 ************************************ 00:09:25.633 START TEST nvmf_nmic 00:09:25.633 ************************************ 00:09:25.633 19:38:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:25.633 * Looking for test storage... 
00:09:25.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:25.892 
19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # xtrace_disable 00:09:25.892 19:38:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # pci_devs=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -a pci_devs 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # pci_net_devs=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # pci_drivers=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -A pci_drivers 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # net_devs=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # local -ga net_devs 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # e810=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # local -ga e810 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # x722=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # local -ga x722 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # mlx=() 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # local -ga mlx 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.800 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:27.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:27.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.801 
19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # [[ up == up ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:27.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # [[ up == up ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:27.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # is_hw=yes 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.801 19:38:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.801 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.802 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.802 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:09:27.802 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.060 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.060 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.060 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:09:28.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:09:28.060 00:09:28.060 --- 10.0.0.2 ping statistics --- 00:09:28.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.060 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:28.060 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:09:28.060 00:09:28.060 --- 10.0.0.1 ping statistics --- 00:09:28.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.060 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:28.060 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.060 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # return 0 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@725 -- # xtrace_disable 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@485 -- # nvmfpid=1104790 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@486 -- # waitforlisten 1104790 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@832 -- # '[' -z 1104790 ']' 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:28.061 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.061 [2024-07-24 19:38:45.286793] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
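The nvmfappstart/waitforlisten steps traced above amount to launching nvmf_tgt inside the test network namespace and then polling its JSON-RPC socket until the application answers; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message is that poll loop. A minimal sketch of the same sequence, using only the command shown in this trace plus the stock rpc_get_methods RPC (the harness helpers add retry limits and logging on top, so this is illustrative, not the harness itself):

# Start the SPDK target in the namespace created earlier in this log
# (run with root privileges, as the CI node does).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Unix-domain sockets are not scoped by network namespace, so the RPC
# socket is reachable from the host; rpc_get_methods succeeds once the
# app is listening.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done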
00:09:28.061 [2024-07-24 19:38:45.286860] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.061 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.061 [2024-07-24 19:38:45.357000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:28.319 [2024-07-24 19:38:45.484240] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.319 [2024-07-24 19:38:45.484305] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.319 [2024-07-24 19:38:45.484322] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.319 [2024-07-24 19:38:45.484335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.319 [2024-07-24 19:38:45.484347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:28.319 [2024-07-24 19:38:45.484402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.319 [2024-07-24 19:38:45.484435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:28.319 [2024-07-24 19:38:45.484488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:28.319 [2024-07-24 19:38:45.484492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@865 -- # return 0 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@731 -- # xtrace_disable 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 [2024-07-24 19:38:45.629387] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.319 Malloc0 00:09:28.319 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # 
xtrace_disable 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.320 [2024-07-24 19:38:45.681051] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:28.320 test case1: single bdev can't be used in multiple subsystems 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.320 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.577 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.577 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:28.577 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:28.577 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.578 [2024-07-24 19:38:45.704884] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:28.578 [2024-07-24 19:38:45.704913] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:28.578 [2024-07-24 19:38:45.704942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.578 request: 00:09:28.578 { 00:09:28.578 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:28.578 "namespace": { 
00:09:28.578 "bdev_name": "Malloc0", 00:09:28.578 "no_auto_visible": false 00:09:28.578 }, 00:09:28.578 "method": "nvmf_subsystem_add_ns", 00:09:28.578 "req_id": 1 00:09:28.578 } 00:09:28.578 Got JSON-RPC error response 00:09:28.578 response: 00:09:28.578 { 00:09:28.578 "code": -32602, 00:09:28.578 "message": "Invalid parameters" 00:09:28.578 } 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:28.578 Adding namespace failed - expected result. 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:28.578 test case2: host connect to nvmf target in multiple paths 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:28.578 [2024-07-24 19:38:45.717017] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:09:28.578 19:38:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.143 19:38:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:29.708 19:38:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.708 19:38:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local i=0 00:09:29.708 19:38:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.708 19:38:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:09:29.708 19:38:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # sleep 2 00:09:31.606 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:09:31.863 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:09:31.863 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.863 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:09:31.863 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.863 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # return 0 
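Before the fio run below, test case2 above has exercised two TCP paths into the same subsystem: a second listener on port 4421 is added, the host connects through both portals with the same hostnqn/hostid, and waitforserial counts block devices by serial number until the namespace shows up. Condensed into a sketch using the same commands the trace shows (rpc.py stands for the scripts/rpc.py wrapper that rpc_cmd drives):

# Second portal for the existing subsystem (the first listener is on 4420).
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421

# Connect through both portals; in this trace the two controllers
# surface a single block device on the host.
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp \
    -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421

# Find the namespace by serial, as waitforserial does above.
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME

A single nvme disconnect -n nqn.2016-06.io.spdk:cnode1 later tears down both paths at once, which is why the teardown further down reports "disconnected 2 controller(s)".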
00:09:31.863 19:38:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:31.863 [global] 00:09:31.863 thread=1 00:09:31.863 invalidate=1 00:09:31.863 rw=write 00:09:31.863 time_based=1 00:09:31.863 runtime=1 00:09:31.863 ioengine=libaio 00:09:31.863 direct=1 00:09:31.863 bs=4096 00:09:31.863 iodepth=1 00:09:31.863 norandommap=0 00:09:31.863 numjobs=1 00:09:31.863 00:09:31.863 verify_dump=1 00:09:31.863 verify_backlog=512 00:09:31.863 verify_state_save=0 00:09:31.863 do_verify=1 00:09:31.863 verify=crc32c-intel 00:09:31.863 [job0] 00:09:31.863 filename=/dev/nvme0n1 00:09:31.863 Could not set queue depth (nvme0n1) 00:09:31.863 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.863 fio-3.35 00:09:31.863 Starting 1 thread 00:09:33.236 00:09:33.236 job0: (groupid=0, jobs=1): err= 0: pid=1105423: Wed Jul 24 19:38:50 2024 00:09:33.236 read: IOPS=1999, BW=7996KiB/s (8188kB/s)(8004KiB/1001msec) 00:09:33.236 slat (nsec): min=4300, max=64559, avg=12480.96, stdev=7214.83 00:09:33.236 clat (usec): min=196, max=42119, avg=271.88, stdev=936.99 00:09:33.236 lat (usec): min=201, max=42127, avg=284.36, stdev=937.04 00:09:33.236 clat percentiles (usec): 00:09:33.236 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:09:33.236 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 251], 00:09:33.236 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 289], 95.00th=[ 302], 00:09:33.236 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 832], 99.95th=[ 1549], 00:09:33.236 | 99.99th=[42206] 00:09:33.236 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:33.236 slat (usec): min=5, max=31580, avg=29.44, stdev=697.55 00:09:33.236 clat (usec): min=135, max=398, avg=173.29, stdev=30.47 00:09:33.236 lat (usec): min=143, max=31852, avg=202.73, stdev=700.54 00:09:33.236 clat percentiles (usec): 00:09:33.236 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:33.236 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:09:33.236 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 210], 95.00th=[ 227], 00:09:33.236 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 347], 99.95th=[ 359], 00:09:33.236 | 99.99th=[ 400] 00:09:33.236 bw ( KiB/s): min= 8968, max= 8968, per=100.00%, avg=8968.00, stdev= 0.00, samples=1 00:09:33.236 iops : min= 2242, max= 2242, avg=2242.00, stdev= 0.00, samples=1 00:09:33.236 lat (usec) : 250=78.74%, 500=21.17%, 750=0.02%, 1000=0.02% 00:09:33.236 lat (msec) : 2=0.02%, 50=0.02% 00:09:33.236 cpu : usr=3.10%, sys=6.30%, ctx=4051, majf=0, minf=2 00:09:33.236 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.236 issued rwts: total=2001,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.236 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.236 00:09:33.236 Run status group 0 (all jobs): 00:09:33.236 READ: bw=7996KiB/s (8188kB/s), 7996KiB/s-7996KiB/s (8188kB/s-8188kB/s), io=8004KiB (8196kB), run=1001-1001msec 00:09:33.237 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:33.237 00:09:33.237 Disk stats (read/write): 00:09:33.237 nvme0n1: ios=1814/2048, merge=0/0, ticks=1409/342, 
in_queue=1751, util=98.90% 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:33.237 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # local i=0 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1232 -- # return 0 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.237 rmmod nvme_tcp 00:09:33.237 rmmod nvme_fabrics 00:09:33.237 rmmod nvme_keyring 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # '[' -n 1104790 ']' 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # killprocess 1104790 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' -z 1104790 ']' 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # kill -0 1104790 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # uname 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1104790 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1104790' 00:09:33.237 killing process with pid 1104790 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@970 -- # kill 1104790 00:09:33.237 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@975 -- # wait 1104790 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:33.496 19:38:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:09:36.029 00:09:36.029 real 0m9.944s 00:09:36.029 user 0m22.146s 00:09:36.029 sys 0m2.440s 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:36.029 ************************************ 00:09:36.029 END TEST nvmf_nmic 00:09:36.029 ************************************ 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.029 ************************************ 00:09:36.029 START TEST nvmf_fio_target 00:09:36.029 ************************************ 00:09:36.029 19:38:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:36.029 * Looking for test storage... 
00:09:36.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.029 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.030 19:38:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # xtrace_disable 00:09:36.030 19:38:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # pci_devs=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -a pci_devs 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # pci_net_devs=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # pci_drivers=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -A pci_drivers 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # net_devs=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # local -ga net_devs 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # e810=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # local -ga e810 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # x722=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # local -ga x722 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # mlx=() 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # local -ga mlx 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:37.929 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:37.929 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:37.929 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:37.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:37.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # is_hw=yes 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.930 
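The two `Found net devices under ...` records above come straight from sysfs: each supported PCI function advertises its kernel interface name under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of the same lookup, with the bus addresses taken from this run:

# Map each detected e810 port to its net interface name
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in "/sys/bus/pci/devices/$pci/net/"*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done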
19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.930 19:38:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:09:37.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:09:37.930 00:09:37.930 --- 10.0.0.2 ping statistics --- 00:09:37.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.930 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:09:37.930 00:09:37.930 --- 10.0.0.1 ping statistics --- 00:09:37.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.930 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # return 0 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@725 -- # xtrace_disable 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@485 -- # nvmfpid=1107504 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@486 -- # waitforlisten 1107504 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@832 -- # '[' -z 1107504 ']' 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:09:37.930 19:38:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.930 [2024-07-24 19:38:55.203856] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
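nvmfappstart above amounts to launching nvmf_tgt inside the namespace prepared by nvmf_tcp_init and blocking until the RPC socket answers. A rough equivalent is sketched below; the readiness loop is a simplification of what waitforlisten does, not its actual code:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Run the target in the namespace that owns the target-side port
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the target services requests
until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done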
00:09:37.930 [2024-07-24 19:38:55.203942] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.930 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.930 [2024-07-24 19:38:55.272627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.189 [2024-07-24 19:38:55.395590] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.189 [2024-07-24 19:38:55.395669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.189 [2024-07-24 19:38:55.395686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.189 [2024-07-24 19:38:55.395699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.189 [2024-07-24 19:38:55.395710] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.189 [2024-07-24 19:38:55.395810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.189 [2024-07-24 19:38:55.395864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.189 [2024-07-24 19:38:55.395915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.189 [2024-07-24 19:38:55.395918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@865 -- # return 0 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@731 -- # xtrace_disable 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:39.122 [2024-07-24 19:38:56.394683] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.122 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.380 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:39.380 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.638 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:39.638 19:38:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:39.896 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:39.896 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.154 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:40.154 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:40.451 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.709 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:40.709 19:38:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:40.967 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:40.967 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:41.225 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:41.225 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:41.482 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:41.740 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.740 19:38:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:41.997 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:41.997 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:42.255 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.513 [2024-07-24 19:38:59.718654] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.513 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:42.771 19:38:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:43.029 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:43.595 19:39:00 
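Condensed, the RPC traffic above provisions the whole target before the initiator connects. The sketch below replays the same calls with the Malloc repetition collapsed into loops; the connect line drops the --hostnqn/--hostid flags shown in the log:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 0 1 2 3 4 5 6; do $rpc bdev_malloc_create 64 512; done     # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'      # striped pair
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $bdev     # four namespaces -> nvme0n1..n4
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420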
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:43.595 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local i=0 00:09:43.595 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.595 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # [[ -n 4 ]] 00:09:43.595 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # nvme_device_counter=4 00:09:43.595 19:39:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # sleep 2 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # nvme_devices=4 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # return 0 00:09:46.126 19:39:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:46.126 [global] 00:09:46.126 thread=1 00:09:46.126 invalidate=1 00:09:46.126 rw=write 00:09:46.126 time_based=1 00:09:46.126 runtime=1 00:09:46.126 ioengine=libaio 00:09:46.126 direct=1 00:09:46.126 bs=4096 00:09:46.126 iodepth=1 00:09:46.126 norandommap=0 00:09:46.126 numjobs=1 00:09:46.126 00:09:46.126 verify_dump=1 00:09:46.126 verify_backlog=512 00:09:46.126 verify_state_save=0 00:09:46.126 do_verify=1 00:09:46.126 verify=crc32c-intel 00:09:46.126 [job0] 00:09:46.126 filename=/dev/nvme0n1 00:09:46.126 [job1] 00:09:46.126 filename=/dev/nvme0n2 00:09:46.126 [job2] 00:09:46.126 filename=/dev/nvme0n3 00:09:46.126 [job3] 00:09:46.126 filename=/dev/nvme0n4 00:09:46.126 Could not set queue depth (nvme0n1) 00:09:46.126 Could not set queue depth (nvme0n2) 00:09:46.126 Could not set queue depth (nvme0n3) 00:09:46.126 Could not set queue depth (nvme0n4) 00:09:46.126 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.126 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.126 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.126 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:46.126 fio-3.35 00:09:46.126 Starting 4 threads 00:09:47.060 00:09:47.060 job0: (groupid=0, jobs=1): err= 0: pid=1108698: Wed Jul 24 19:39:04 2024 00:09:47.060 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:09:47.060 slat (nsec): min=7947, max=35594, avg=20908.41, stdev=8341.49 00:09:47.060 clat (usec): min=40497, max=41962, avg=40998.30, stdev=239.64 00:09:47.060 lat (usec): min=40505, max=41980, avg=41019.20, stdev=240.16 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:09:47.060 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:47.060 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:47.060 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:47.060 | 99.99th=[42206] 00:09:47.060 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:47.060 slat (nsec): min=7384, max=57772, avg=15775.80, stdev=6887.56 00:09:47.060 clat (usec): min=164, max=462, avg=218.07, stdev=37.48 00:09:47.060 lat (usec): min=172, max=491, avg=233.85, stdev=40.16 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 194], 00:09:47.060 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 217], 00:09:47.060 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 297], 00:09:47.060 | 99.00th=[ 359], 99.50th=[ 429], 99.90th=[ 461], 99.95th=[ 461], 00:09:47.060 | 99.99th=[ 461] 00:09:47.060 bw ( KiB/s): min= 4096, max= 4096, per=24.46%, avg=4096.00, stdev= 0.00, samples=1 00:09:47.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:47.060 lat (usec) : 250=86.33%, 500=9.55% 00:09:47.060 lat (msec) : 50=4.12% 00:09:47.060 cpu : usr=0.20%, sys=0.98%, ctx=535, majf=0, minf=1 00:09:47.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.060 job1: (groupid=0, jobs=1): err= 0: pid=1108700: Wed Jul 24 19:39:04 2024 00:09:47.060 read: IOPS=1517, BW=6069KiB/s (6215kB/s)(6148KiB/1013msec) 00:09:47.060 slat (nsec): min=5244, max=39531, avg=13348.18, stdev=5464.78 00:09:47.060 clat (usec): min=196, max=40699, avg=370.32, stdev=1034.87 00:09:47.060 lat (usec): min=203, max=40716, avg=383.67, stdev=1035.12 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 249], 00:09:47.060 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 343], 00:09:47.060 | 70.00th=[ 392], 80.00th=[ 453], 90.00th=[ 519], 95.00th=[ 545], 00:09:47.060 | 99.00th=[ 594], 99.50th=[ 603], 99.90th=[ 807], 99.95th=[40633], 00:09:47.060 | 99.99th=[40633] 00:09:47.060 write: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec); 0 zone resets 00:09:47.060 slat (nsec): min=6860, max=52131, avg=15396.60, stdev=6808.77 00:09:47.060 clat (usec): min=134, max=316, avg=183.76, stdev=31.17 00:09:47.060 lat (usec): min=142, max=329, avg=199.15, stdev=33.83 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 155], 00:09:47.060 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 178], 60.00th=[ 186], 00:09:47.060 | 70.00th=[ 196], 80.00th=[ 215], 90.00th=[ 231], 95.00th=[ 241], 00:09:47.060 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 306], 99.95th=[ 318], 00:09:47.060 | 99.99th=[ 318] 00:09:47.060 bw ( KiB/s): min= 8192, max= 8192, per=48.92%, avg=8192.00, stdev= 0.00, samples=2 00:09:47.060 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:47.060 lat (usec) : 250=64.91%, 500=29.96%, 750=5.08%, 1000=0.03% 00:09:47.060 lat (msec) : 50=0.03% 00:09:47.060 cpu : usr=2.77%, sys=5.34%, ctx=3588, majf=0, minf=1 00:09:47.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 issued rwts: total=1537,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.060 job2: (groupid=0, jobs=1): err= 0: pid=1108701: Wed Jul 24 19:39:04 2024 00:09:47.060 read: IOPS=28, BW=112KiB/s (115kB/s)(116KiB/1032msec) 00:09:47.060 slat (nsec): min=6796, max=32737, avg=18429.72, stdev=8560.38 00:09:47.060 clat (usec): min=250, max=42011, avg=31232.00, stdev=17767.75 00:09:47.060 lat (usec): min=265, max=42026, avg=31250.43, stdev=17771.43 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 251], 5.00th=[ 260], 10.00th=[ 269], 20.00th=[ 297], 00:09:47.060 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:47.060 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:47.060 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:47.060 | 99.99th=[42206] 00:09:47.060 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:09:47.060 slat (nsec): min=7023, max=39967, avg=15207.33, stdev=6502.07 00:09:47.060 clat (usec): min=178, max=471, avg=225.13, stdev=35.28 00:09:47.060 lat (usec): min=191, max=491, avg=240.34, stdev=36.34 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:09:47.060 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:09:47.060 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 269], 95.00th=[ 297], 00:09:47.060 | 99.00th=[ 347], 99.50th=[ 416], 99.90th=[ 474], 99.95th=[ 474], 00:09:47.060 | 99.99th=[ 474] 00:09:47.060 bw ( KiB/s): min= 4096, max= 4096, per=24.46%, avg=4096.00, stdev= 0.00, samples=1 00:09:47.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:47.060 lat (usec) : 250=81.70%, 500=14.23% 00:09:47.060 lat (msec) : 50=4.07% 00:09:47.060 cpu : usr=0.48%, sys=0.58%, ctx=541, majf=0, minf=2 00:09:47.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 issued rwts: total=29,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.060 job3: (groupid=0, jobs=1): err= 0: pid=1108702: Wed Jul 24 19:39:04 2024 00:09:47.060 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:47.060 slat (nsec): min=6147, max=20404, avg=8175.06, stdev=1757.24 00:09:47.060 clat (usec): min=248, max=42060, avg=693.37, stdev=3607.55 00:09:47.060 lat (usec): min=255, max=42073, avg=701.55, stdev=3607.88 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 277], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:09:47.060 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 355], 60.00th=[ 379], 00:09:47.060 | 70.00th=[ 396], 80.00th=[ 420], 90.00th=[ 490], 95.00th=[ 529], 00:09:47.060 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:47.060 | 99.99th=[42206] 00:09:47.060 write: IOPS=1246, BW=4987KiB/s (5107kB/s)(4992KiB/1001msec); 0 zone resets 00:09:47.060 slat (nsec): min=6802, max=73049, avg=9958.37, stdev=3175.24 00:09:47.060 clat (usec): min=162, max=490, avg=211.11, stdev=24.88 00:09:47.060 lat (usec): min=170, 
max=502, avg=221.07, stdev=26.02 00:09:47.060 clat percentiles (usec): 00:09:47.060 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:09:47.060 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 215], 00:09:47.060 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 251], 00:09:47.060 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 359], 99.95th=[ 490], 00:09:47.060 | 99.99th=[ 490] 00:09:47.060 bw ( KiB/s): min= 4096, max= 4096, per=24.46%, avg=4096.00, stdev= 0.00, samples=1 00:09:47.060 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:47.060 lat (usec) : 250=52.07%, 500=44.15%, 750=3.43% 00:09:47.060 lat (msec) : 50=0.35% 00:09:47.060 cpu : usr=1.00%, sys=2.20%, ctx=2273, majf=0, minf=1 00:09:47.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:47.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.060 issued rwts: total=1024,1248,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:47.060 00:09:47.060 Run status group 0 (all jobs): 00:09:47.060 READ: bw=9.89MiB/s (10.4MB/s), 85.9KiB/s-6069KiB/s (87.9kB/s-6215kB/s), io=10.2MiB (10.7MB), run=1001-1032msec 00:09:47.060 WRITE: bw=16.4MiB/s (17.1MB/s), 1984KiB/s-8087KiB/s (2032kB/s-8281kB/s), io=16.9MiB (17.7MB), run=1001-1032msec 00:09:47.060 00:09:47.060 Disk stats (read/write): 00:09:47.060 nvme0n1: ios=69/512, merge=0/0, ticks=1296/107, in_queue=1403, util=97.19% 00:09:47.060 nvme0n2: ios=1506/1536, merge=0/0, ticks=1488/264, in_queue=1752, util=97.45% 00:09:47.060 nvme0n3: ios=24/512, merge=0/0, ticks=700/108, in_queue=808, util=88.74% 00:09:47.060 nvme0n4: ios=753/1024, merge=0/0, ticks=596/214, in_queue=810, util=89.49% 00:09:47.060 19:39:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:47.060 [global] 00:09:47.060 thread=1 00:09:47.060 invalidate=1 00:09:47.060 rw=randwrite 00:09:47.060 time_based=1 00:09:47.060 runtime=1 00:09:47.060 ioengine=libaio 00:09:47.060 direct=1 00:09:47.060 bs=4096 00:09:47.060 iodepth=1 00:09:47.060 norandommap=0 00:09:47.061 numjobs=1 00:09:47.061 00:09:47.061 verify_dump=1 00:09:47.061 verify_backlog=512 00:09:47.061 verify_state_save=0 00:09:47.061 do_verify=1 00:09:47.061 verify=crc32c-intel 00:09:47.061 [job0] 00:09:47.061 filename=/dev/nvme0n1 00:09:47.318 [job1] 00:09:47.318 filename=/dev/nvme0n2 00:09:47.318 [job2] 00:09:47.318 filename=/dev/nvme0n3 00:09:47.318 [job3] 00:09:47.318 filename=/dev/nvme0n4 00:09:47.318 Could not set queue depth (nvme0n1) 00:09:47.318 Could not set queue depth (nvme0n2) 00:09:47.318 Could not set queue depth (nvme0n3) 00:09:47.318 Could not set queue depth (nvme0n4) 00:09:47.318 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.318 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.318 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.318 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:47.318 fio-3.35 00:09:47.318 Starting 4 threads 00:09:48.691 00:09:48.691 job0: (groupid=0, jobs=1): err= 0: pid=1108932: Wed 
Jul 24 19:39:05 2024 00:09:48.691 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:48.691 slat (nsec): min=4690, max=62977, avg=15472.88, stdev=9143.41 00:09:48.691 clat (usec): min=238, max=655, avg=378.19, stdev=79.35 00:09:48.691 lat (usec): min=244, max=672, avg=393.66, stdev=83.15 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 247], 5.00th=[ 260], 10.00th=[ 273], 20.00th=[ 306], 00:09:48.691 | 30.00th=[ 326], 40.00th=[ 355], 50.00th=[ 371], 60.00th=[ 388], 00:09:48.691 | 70.00th=[ 424], 80.00th=[ 461], 90.00th=[ 486], 95.00th=[ 506], 00:09:48.691 | 99.00th=[ 578], 99.50th=[ 619], 99.90th=[ 644], 99.95th=[ 660], 00:09:48.691 | 99.99th=[ 660] 00:09:48.691 write: IOPS=1647, BW=6589KiB/s (6748kB/s)(6596KiB/1001msec); 0 zone resets 00:09:48.691 slat (nsec): min=5624, max=72223, avg=13842.14, stdev=7364.74 00:09:48.691 clat (usec): min=142, max=463, avg=217.30, stdev=59.84 00:09:48.691 lat (usec): min=150, max=494, avg=231.14, stdev=63.61 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:09:48.691 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 204], 00:09:48.691 | 70.00th=[ 233], 80.00th=[ 258], 90.00th=[ 306], 95.00th=[ 347], 00:09:48.691 | 99.00th=[ 416], 99.50th=[ 424], 99.90th=[ 445], 99.95th=[ 465], 00:09:48.691 | 99.99th=[ 465] 00:09:48.691 bw ( KiB/s): min= 8192, max= 8192, per=35.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:48.691 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:48.691 lat (usec) : 250=40.91%, 500=56.20%, 750=2.89% 00:09:48.691 cpu : usr=3.50%, sys=5.60%, ctx=3185, majf=0, minf=1 00:09:48.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 issued rwts: total=1536,1649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.691 job1: (groupid=0, jobs=1): err= 0: pid=1108933: Wed Jul 24 19:39:05 2024 00:09:48.691 read: IOPS=1764, BW=7057KiB/s (7226kB/s)(7064KiB/1001msec) 00:09:48.691 slat (nsec): min=4742, max=55356, avg=16228.34, stdev=10355.16 00:09:48.691 clat (usec): min=212, max=419, avg=291.50, stdev=41.30 00:09:48.691 lat (usec): min=217, max=461, avg=307.73, stdev=47.26 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:09:48.691 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 293], 00:09:48.691 | 70.00th=[ 314], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 363], 00:09:48.691 | 99.00th=[ 396], 99.50th=[ 404], 99.90th=[ 416], 99.95th=[ 420], 00:09:48.691 | 99.99th=[ 420] 00:09:48.691 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:48.691 slat (nsec): min=6124, max=69681, avg=14108.85, stdev=8416.38 00:09:48.691 clat (usec): min=151, max=451, avg=199.78, stdev=46.10 00:09:48.691 lat (usec): min=159, max=467, avg=213.89, stdev=50.16 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174], 00:09:48.691 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:09:48.691 | 70.00th=[ 198], 80.00th=[ 212], 90.00th=[ 245], 95.00th=[ 338], 00:09:48.691 | 99.00th=[ 363], 99.50th=[ 383], 99.90th=[ 412], 99.95th=[ 420], 00:09:48.691 | 99.99th=[ 453] 00:09:48.691 bw ( KiB/s): min= 8192, max= 8192, 
per=35.68%, avg=8192.00, stdev= 0.00, samples=1 00:09:48.691 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:48.691 lat (usec) : 250=56.74%, 500=43.26% 00:09:48.691 cpu : usr=3.30%, sys=6.00%, ctx=3817, majf=0, minf=1 00:09:48.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 issued rwts: total=1766,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.691 job2: (groupid=0, jobs=1): err= 0: pid=1108934: Wed Jul 24 19:39:05 2024 00:09:48.691 read: IOPS=1233, BW=4935KiB/s (5054kB/s)(4940KiB/1001msec) 00:09:48.691 slat (nsec): min=5964, max=74179, avg=17395.84, stdev=7869.31 00:09:48.691 clat (usec): min=232, max=40926, avg=436.63, stdev=1158.06 00:09:48.691 lat (usec): min=240, max=40933, avg=454.03, stdev=1157.97 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 255], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 306], 00:09:48.691 | 30.00th=[ 318], 40.00th=[ 359], 50.00th=[ 375], 60.00th=[ 404], 00:09:48.691 | 70.00th=[ 453], 80.00th=[ 515], 90.00th=[ 553], 95.00th=[ 594], 00:09:48.691 | 99.00th=[ 701], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[41157], 00:09:48.691 | 99.99th=[41157] 00:09:48.691 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:48.691 slat (nsec): min=8130, max=68515, avg=17390.82, stdev=6238.19 00:09:48.691 clat (usec): min=172, max=1286, avg=259.06, stdev=61.12 00:09:48.691 lat (usec): min=184, max=1308, avg=276.45, stdev=58.95 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 184], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215], 00:09:48.691 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 241], 60.00th=[ 251], 00:09:48.691 | 70.00th=[ 273], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 375], 00:09:48.691 | 99.00th=[ 412], 99.50th=[ 445], 99.90th=[ 482], 99.95th=[ 1287], 00:09:48.691 | 99.99th=[ 1287] 00:09:48.691 bw ( KiB/s): min= 5360, max= 5360, per=23.35%, avg=5360.00, stdev= 0.00, samples=1 00:09:48.691 iops : min= 1340, max= 1340, avg=1340.00, stdev= 0.00, samples=1 00:09:48.691 lat (usec) : 250=32.70%, 500=56.87%, 750=10.14%, 1000=0.22% 00:09:48.691 lat (msec) : 2=0.04%, 50=0.04% 00:09:48.691 cpu : usr=3.60%, sys=6.40%, ctx=2772, majf=0, minf=2 00:09:48.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 issued rwts: total=1235,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.691 job3: (groupid=0, jobs=1): err= 0: pid=1108935: Wed Jul 24 19:39:05 2024 00:09:48.691 read: IOPS=22, BW=91.9KiB/s (94.1kB/s)(92.0KiB/1001msec) 00:09:48.691 slat (nsec): min=8522, max=36292, avg=21488.09, stdev=9190.26 00:09:48.691 clat (usec): min=315, max=41992, avg=36223.59, stdev=13161.39 00:09:48.691 lat (usec): min=332, max=42027, avg=36245.08, stdev=13164.73 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 318], 5.00th=[ 445], 10.00th=[ 8848], 20.00th=[41157], 00:09:48.691 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:48.691 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:48.691 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:48.691 | 99.99th=[42206] 00:09:48.691 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:48.691 slat (nsec): min=7883, max=30784, avg=12409.42, stdev=3581.13 00:09:48.691 clat (usec): min=216, max=418, avg=307.63, stdev=42.86 00:09:48.691 lat (usec): min=231, max=428, avg=320.04, stdev=41.24 00:09:48.691 clat percentiles (usec): 00:09:48.691 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 265], 00:09:48.691 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 318], 00:09:48.691 | 70.00th=[ 330], 80.00th=[ 338], 90.00th=[ 363], 95.00th=[ 396], 00:09:48.691 | 99.00th=[ 404], 99.50th=[ 408], 99.90th=[ 420], 99.95th=[ 420], 00:09:48.691 | 99.99th=[ 420] 00:09:48.691 bw ( KiB/s): min= 4096, max= 4096, per=17.84%, avg=4096.00, stdev= 0.00, samples=1 00:09:48.691 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:48.691 lat (usec) : 250=10.28%, 500=85.79% 00:09:48.691 lat (msec) : 10=0.19%, 50=3.74% 00:09:48.691 cpu : usr=0.40%, sys=0.90%, ctx=536, majf=0, minf=1 00:09:48.691 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:48.691 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:48.691 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:48.691 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:48.691 00:09:48.691 Run status group 0 (all jobs): 00:09:48.691 READ: bw=17.8MiB/s (18.7MB/s), 91.9KiB/s-7057KiB/s (94.1kB/s-7226kB/s), io=17.8MiB (18.7MB), run=1001-1001msec 00:09:48.691 WRITE: bw=22.4MiB/s (23.5MB/s), 2046KiB/s-8184KiB/s (2095kB/s-8380kB/s), io=22.4MiB (23.5MB), run=1001-1001msec 00:09:48.691 00:09:48.691 Disk stats (read/write): 00:09:48.691 nvme0n1: ios=1333/1536, merge=0/0, ticks=487/304, in_queue=791, util=87.27% 00:09:48.691 nvme0n2: ios=1573/1749, merge=0/0, ticks=767/341, in_queue=1108, util=97.36% 00:09:48.691 nvme0n3: ios=1073/1343, merge=0/0, ticks=776/346, in_queue=1122, util=94.99% 00:09:48.691 nvme0n4: ios=43/512, merge=0/0, ticks=1619/152, in_queue=1771, util=98.32% 00:09:48.691 19:39:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:48.691 [global] 00:09:48.691 thread=1 00:09:48.691 invalidate=1 00:09:48.692 rw=write 00:09:48.692 time_based=1 00:09:48.692 runtime=1 00:09:48.692 ioengine=libaio 00:09:48.692 direct=1 00:09:48.692 bs=4096 00:09:48.692 iodepth=128 00:09:48.692 norandommap=0 00:09:48.692 numjobs=1 00:09:48.692 00:09:48.692 verify_dump=1 00:09:48.692 verify_backlog=512 00:09:48.692 verify_state_save=0 00:09:48.692 do_verify=1 00:09:48.692 verify=crc32c-intel 00:09:48.692 [job0] 00:09:48.692 filename=/dev/nvme0n1 00:09:48.692 [job1] 00:09:48.692 filename=/dev/nvme0n2 00:09:48.692 [job2] 00:09:48.692 filename=/dev/nvme0n3 00:09:48.692 [job3] 00:09:48.692 filename=/dev/nvme0n4 00:09:48.692 Could not set queue depth (nvme0n1) 00:09:48.692 Could not set queue depth (nvme0n2) 00:09:48.692 Could not set queue depth (nvme0n3) 00:09:48.692 Could not set queue depth (nvme0n4) 00:09:48.949 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.949 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.950 job2: (g=0): rw=write, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.950 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:48.950 fio-3.35 00:09:48.950 Starting 4 threads 00:09:50.322 00:09:50.322 job0: (groupid=0, jobs=1): err= 0: pid=1109259: Wed Jul 24 19:39:07 2024 00:09:50.322 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:09:50.322 slat (usec): min=2, max=8315, avg=88.66, stdev=519.39 00:09:50.322 clat (usec): min=3740, max=31834, avg=11530.63, stdev=3263.61 00:09:50.322 lat (usec): min=3791, max=31853, avg=11619.29, stdev=3291.56 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[ 5211], 5.00th=[ 6521], 10.00th=[ 7832], 20.00th=[ 9110], 00:09:50.322 | 30.00th=[10028], 40.00th=[11076], 50.00th=[11731], 60.00th=[11994], 00:09:50.322 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14353], 95.00th=[16450], 00:09:50.322 | 99.00th=[25035], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:09:50.322 | 99.99th=[31851] 00:09:50.322 write: IOPS=5510, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1002msec); 0 zone resets 00:09:50.322 slat (usec): min=3, max=12167, avg=88.66, stdev=451.81 00:09:50.322 clat (usec): min=739, max=32196, avg=12251.64, stdev=4024.44 00:09:50.322 lat (usec): min=762, max=32209, avg=12340.30, stdev=4039.49 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[ 5473], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[ 9634], 00:09:50.322 | 30.00th=[ 9896], 40.00th=[11076], 50.00th=[11731], 60.00th=[12256], 00:09:50.322 | 70.00th=[12780], 80.00th=[13566], 90.00th=[16319], 95.00th=[21627], 00:09:50.322 | 99.00th=[26608], 99.50th=[29754], 99.90th=[31065], 99.95th=[31065], 00:09:50.322 | 99.99th=[32113] 00:09:50.322 bw ( KiB/s): min=18584, max=24576, per=35.49%, avg=21580.00, stdev=4236.98, samples=2 00:09:50.322 iops : min= 4646, max= 6144, avg=5395.00, stdev=1059.25, samples=2 00:09:50.322 lat (usec) : 750=0.01% 00:09:50.322 lat (msec) : 2=0.11%, 4=0.17%, 10=29.11%, 20=66.31%, 50=4.28% 00:09:50.322 cpu : usr=5.59%, sys=9.49%, ctx=557, majf=0, minf=1 00:09:50.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:50.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.322 issued rwts: total=5120,5522,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.322 job1: (groupid=0, jobs=1): err= 0: pid=1109283: Wed Jul 24 19:39:07 2024 00:09:50.322 read: IOPS=4943, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1006msec) 00:09:50.322 slat (usec): min=2, max=26214, avg=103.69, stdev=851.58 00:09:50.322 clat (usec): min=813, max=71327, avg=13795.03, stdev=7603.65 00:09:50.322 lat (usec): min=4870, max=71332, avg=13898.72, stdev=7645.39 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[ 5342], 5.00th=[ 6325], 10.00th=[ 8979], 20.00th=[10028], 00:09:50.322 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11600], 60.00th=[12256], 00:09:50.322 | 70.00th=[13566], 80.00th=[15926], 90.00th=[18744], 95.00th=[27132], 00:09:50.322 | 99.00th=[52691], 99.50th=[54789], 99.90th=[54789], 99.95th=[71828], 00:09:50.322 | 99.99th=[71828] 00:09:50.322 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:09:50.322 slat (usec): min=3, max=11729, avg=78.30, stdev=550.17 00:09:50.322 clat (usec): min=745, max=59825, avg=11530.07, stdev=5542.26 00:09:50.322 lat (usec): min=759, max=59836, 
avg=11608.37, stdev=5556.90 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[ 3130], 5.00th=[ 6128], 10.00th=[ 7177], 20.00th=[ 8979], 00:09:50.322 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11076], 60.00th=[11338], 00:09:50.322 | 70.00th=[12125], 80.00th=[13173], 90.00th=[14746], 95.00th=[17695], 00:09:50.322 | 99.00th=[45876], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:09:50.322 | 99.99th=[60031] 00:09:50.322 bw ( KiB/s): min=20480, max=20480, per=33.68%, avg=20480.00, stdev= 0.00, samples=2 00:09:50.322 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:50.322 lat (usec) : 750=0.02%, 1000=0.06% 00:09:50.322 lat (msec) : 4=0.72%, 10=25.90%, 20=67.08%, 50=5.24%, 100=0.98% 00:09:50.322 cpu : usr=5.97%, sys=5.57%, ctx=351, majf=0, minf=1 00:09:50.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:50.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.322 issued rwts: total=4973,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.322 job2: (groupid=0, jobs=1): err= 0: pid=1109285: Wed Jul 24 19:39:07 2024 00:09:50.322 read: IOPS=2651, BW=10.4MiB/s (10.9MB/s)(10.8MiB/1044msec) 00:09:50.322 slat (usec): min=2, max=17929, avg=179.82, stdev=1102.39 00:09:50.322 clat (usec): min=8236, max=62672, avg=24806.72, stdev=11195.52 00:09:50.322 lat (usec): min=8251, max=62708, avg=24986.54, stdev=11257.55 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[11600], 5.00th=[14091], 10.00th=[15664], 20.00th=[17171], 00:09:50.322 | 30.00th=[17695], 40.00th=[17695], 50.00th=[20317], 60.00th=[21890], 00:09:50.322 | 70.00th=[26084], 80.00th=[32375], 90.00th=[42730], 95.00th=[49546], 00:09:50.322 | 99.00th=[58983], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:09:50.322 | 99.99th=[62653] 00:09:50.322 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:09:50.322 slat (usec): min=3, max=19225, avg=152.59, stdev=912.76 00:09:50.322 clat (usec): min=7563, max=50337, avg=20036.29, stdev=7940.58 00:09:50.322 lat (usec): min=7572, max=50374, avg=20188.88, stdev=8013.81 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[ 8979], 5.00th=[11207], 10.00th=[11600], 20.00th=[12649], 00:09:50.322 | 30.00th=[13566], 40.00th=[15533], 50.00th=[18482], 60.00th=[20055], 00:09:50.322 | 70.00th=[23462], 80.00th=[28443], 90.00th=[32637], 95.00th=[34866], 00:09:50.322 | 99.00th=[36963], 99.50th=[37487], 99.90th=[40633], 99.95th=[47449], 00:09:50.322 | 99.99th=[50594] 00:09:50.322 bw ( KiB/s): min=12288, max=12288, per=20.21%, avg=12288.00, stdev= 0.00, samples=2 00:09:50.322 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:50.322 lat (msec) : 10=1.06%, 20=52.59%, 50=44.25%, 100=2.11% 00:09:50.322 cpu : usr=3.16%, sys=6.33%, ctx=281, majf=0, minf=1 00:09:50.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:50.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.322 issued rwts: total=2768,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.322 job3: (groupid=0, jobs=1): err= 0: pid=1109286: Wed Jul 24 19:39:07 2024 00:09:50.322 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 
00:09:50.322 slat (usec): min=2, max=26854, avg=270.91, stdev=1714.89 00:09:50.322 clat (msec): min=4, max=144, avg=33.79, stdev=32.39 00:09:50.322 lat (msec): min=4, max=144, avg=34.06, stdev=32.60 00:09:50.322 clat percentiles (msec): 00:09:50.322 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 14], 00:09:50.322 | 30.00th=[ 16], 40.00th=[ 20], 50.00th=[ 24], 60.00th=[ 27], 00:09:50.322 | 70.00th=[ 34], 80.00th=[ 38], 90.00th=[ 97], 95.00th=[ 117], 00:09:50.322 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:09:50.322 | 99.99th=[ 144] 00:09:50.322 write: IOPS=2146, BW=8585KiB/s (8791kB/s)(8628KiB/1005msec); 0 zone resets 00:09:50.322 slat (usec): min=3, max=27340, avg=189.48, stdev=1117.55 00:09:50.322 clat (usec): min=1534, max=96782, avg=25171.70, stdev=15001.81 00:09:50.322 lat (usec): min=3181, max=96790, avg=25361.18, stdev=15081.88 00:09:50.322 clat percentiles (usec): 00:09:50.322 | 1.00th=[ 5407], 5.00th=[ 9110], 10.00th=[12911], 20.00th=[14615], 00:09:50.322 | 30.00th=[17171], 40.00th=[17433], 50.00th=[19006], 60.00th=[23200], 00:09:50.322 | 70.00th=[26870], 80.00th=[34866], 90.00th=[49021], 95.00th=[55837], 00:09:50.322 | 99.00th=[79168], 99.50th=[87557], 99.90th=[88605], 99.95th=[96994], 00:09:50.322 | 99.99th=[96994] 00:09:50.322 bw ( KiB/s): min= 8192, max= 8240, per=13.51%, avg=8216.00, stdev=33.94, samples=2 00:09:50.322 iops : min= 2048, max= 2060, avg=2054.00, stdev= 8.49, samples=2 00:09:50.322 lat (msec) : 2=0.02%, 4=0.29%, 10=5.66%, 20=41.97%, 50=40.83% 00:09:50.322 lat (msec) : 100=6.97%, 250=4.26% 00:09:50.322 cpu : usr=1.39%, sys=3.59%, ctx=214, majf=0, minf=1 00:09:50.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:09:50.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:50.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:50.322 issued rwts: total=2048,2157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:50.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:50.322 00:09:50.322 Run status group 0 (all jobs): 00:09:50.322 READ: bw=55.8MiB/s (58.5MB/s), 8151KiB/s-20.0MiB/s (8347kB/s-20.9MB/s), io=58.2MiB (61.1MB), run=1002-1044msec 00:09:50.322 WRITE: bw=59.4MiB/s (62.3MB/s), 8585KiB/s-21.5MiB/s (8791kB/s-22.6MB/s), io=62.0MiB (65.0MB), run=1002-1044msec 00:09:50.322 00:09:50.322 Disk stats (read/write): 00:09:50.322 nvme0n1: ios=4657/4751, merge=0/0, ticks=22892/24721, in_queue=47613, util=84.77% 00:09:50.322 nvme0n2: ios=4151/4608, merge=0/0, ticks=35322/27877, in_queue=63199, util=89.32% 00:09:50.322 nvme0n3: ios=2097/2407, merge=0/0, ticks=22541/21399, in_queue=43940, util=93.30% 00:09:50.322 nvme0n4: ios=1590/1575, merge=0/0, ticks=19825/14650, in_queue=34475, util=94.93% 00:09:50.322 19:39:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:50.322 [global] 00:09:50.322 thread=1 00:09:50.322 invalidate=1 00:09:50.322 rw=randwrite 00:09:50.322 time_based=1 00:09:50.322 runtime=1 00:09:50.322 ioengine=libaio 00:09:50.322 direct=1 00:09:50.322 bs=4096 00:09:50.322 iodepth=128 00:09:50.322 norandommap=0 00:09:50.322 numjobs=1 00:09:50.322 00:09:50.322 verify_dump=1 00:09:50.322 verify_backlog=512 00:09:50.322 verify_state_save=0 00:09:50.322 do_verify=1 00:09:50.322 verify=crc32c-intel 00:09:50.322 [job0] 00:09:50.322 filename=/dev/nvme0n1 00:09:50.322 [job1] 00:09:50.322 filename=/dev/nvme0n2 
00:09:50.322 [job2] 00:09:50.322 filename=/dev/nvme0n3 00:09:50.322 [job3] 00:09:50.322 filename=/dev/nvme0n4 00:09:50.322 Could not set queue depth (nvme0n1) 00:09:50.322 Could not set queue depth (nvme0n2) 00:09:50.322 Could not set queue depth (nvme0n3) 00:09:50.322 Could not set queue depth (nvme0n4) 00:09:50.322 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.322 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.322 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.322 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:50.322 fio-3.35 00:09:50.322 Starting 4 threads 00:09:51.694 00:09:51.694 job0: (groupid=0, jobs=1): err= 0: pid=1109673: Wed Jul 24 19:39:08 2024 00:09:51.694 read: IOPS=3898, BW=15.2MiB/s (16.0MB/s)(15.4MiB/1010msec) 00:09:51.694 slat (usec): min=3, max=16087, avg=121.00, stdev=870.41 00:09:51.694 clat (usec): min=3301, max=43760, avg=15397.09, stdev=6698.03 00:09:51.694 lat (usec): min=4674, max=43777, avg=15518.09, stdev=6757.10 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 7832], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11469], 00:09:51.694 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12256], 60.00th=[13304], 00:09:51.694 | 70.00th=[15270], 80.00th=[19268], 90.00th=[23462], 95.00th=[31851], 00:09:51.694 | 99.00th=[38011], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:09:51.694 | 99.99th=[43779] 00:09:51.694 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:09:51.694 slat (usec): min=3, max=18110, avg=116.02, stdev=724.47 00:09:51.694 clat (usec): min=1394, max=43730, avg=16470.99, stdev=7477.94 00:09:51.694 lat (usec): min=1405, max=43738, avg=16587.01, stdev=7553.14 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 4424], 5.00th=[ 7767], 10.00th=[ 9241], 20.00th=[11076], 00:09:51.694 | 30.00th=[11731], 40.00th=[12387], 50.00th=[12649], 60.00th=[16319], 00:09:51.694 | 70.00th=[21627], 80.00th=[22938], 90.00th=[24511], 95.00th=[32637], 00:09:51.694 | 99.00th=[35914], 99.50th=[35914], 99.90th=[41157], 99.95th=[43779], 00:09:51.694 | 99.99th=[43779] 00:09:51.694 bw ( KiB/s): min=14640, max=18128, per=24.43%, avg=16384.00, stdev=2466.39, samples=2 00:09:51.694 iops : min= 3660, max= 4532, avg=4096.00, stdev=616.60, samples=2 00:09:51.694 lat (msec) : 2=0.07%, 4=0.26%, 10=8.71%, 20=62.88%, 50=28.07% 00:09:51.694 cpu : usr=4.36%, sys=6.64%, ctx=424, majf=0, minf=1 00:09:51.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:51.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.694 issued rwts: total=3937,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.694 job1: (groupid=0, jobs=1): err= 0: pid=1109674: Wed Jul 24 19:39:08 2024 00:09:51.694 read: IOPS=5081, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:09:51.694 slat (usec): min=2, max=18789, avg=101.99, stdev=717.25 00:09:51.694 clat (usec): min=1742, max=32841, avg=12505.53, stdev=3736.00 00:09:51.694 lat (usec): min=3942, max=42817, avg=12607.52, stdev=3794.90 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 5342], 5.00th=[ 7439], 10.00th=[ 9896], 
20.00th=[10421], 00:09:51.694 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11731], 60.00th=[11994], 00:09:51.694 | 70.00th=[12518], 80.00th=[14222], 90.00th=[18220], 95.00th=[20317], 00:09:51.694 | 99.00th=[24249], 99.50th=[28181], 99.90th=[32113], 99.95th=[32375], 00:09:51.694 | 99.99th=[32900] 00:09:51.694 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:09:51.694 slat (usec): min=3, max=9643, avg=87.14, stdev=405.23 00:09:51.694 clat (usec): min=2518, max=46484, avg=12356.17, stdev=5584.99 00:09:51.694 lat (usec): min=2530, max=46498, avg=12443.31, stdev=5622.30 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 3359], 5.00th=[ 6783], 10.00th=[ 7767], 20.00th=[10421], 00:09:51.694 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11731], 00:09:51.694 | 70.00th=[12125], 80.00th=[12518], 90.00th=[17171], 95.00th=[22414], 00:09:51.694 | 99.00th=[40109], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:09:51.694 | 99.99th=[46400] 00:09:51.694 bw ( KiB/s): min=20320, max=20640, per=30.54%, avg=20480.00, stdev=226.27, samples=2 00:09:51.694 iops : min= 5080, max= 5160, avg=5120.00, stdev=56.57, samples=2 00:09:51.694 lat (msec) : 2=0.01%, 4=0.78%, 10=12.51%, 20=78.82%, 50=7.88% 00:09:51.694 cpu : usr=4.78%, sys=5.97%, ctx=635, majf=0, minf=1 00:09:51.694 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:51.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.694 issued rwts: total=5112,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.694 job2: (groupid=0, jobs=1): err= 0: pid=1109675: Wed Jul 24 19:39:08 2024 00:09:51.694 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:51.694 slat (usec): min=3, max=4429, avg=102.65, stdev=540.89 00:09:51.694 clat (usec): min=9279, max=18979, avg=13253.50, stdev=1348.45 00:09:51.694 lat (usec): min=9288, max=18999, avg=13356.15, stdev=1394.30 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11338], 20.00th=[12387], 00:09:51.694 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13435], 60.00th=[13698], 00:09:51.694 | 70.00th=[13960], 80.00th=[14222], 90.00th=[14615], 95.00th=[15139], 00:09:51.694 | 99.00th=[16909], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:09:51.694 | 99.99th=[19006] 00:09:51.694 write: IOPS=4808, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec); 0 zone resets 00:09:51.694 slat (usec): min=4, max=4832, avg=100.18, stdev=483.89 00:09:51.694 clat (usec): min=570, max=31771, avg=13575.59, stdev=2886.78 00:09:51.694 lat (usec): min=4349, max=31790, avg=13675.77, stdev=2919.56 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 8455], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:09:51.694 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13042], 60.00th=[13435], 00:09:51.694 | 70.00th=[13698], 80.00th=[13960], 90.00th=[15008], 95.00th=[17171], 00:09:51.694 | 99.00th=[29754], 99.50th=[31589], 99.90th=[31589], 99.95th=[31851], 00:09:51.694 | 99.99th=[31851] 00:09:51.694 bw ( KiB/s): min=17800, max=19720, per=27.98%, avg=18760.00, stdev=1357.65, samples=2 00:09:51.694 iops : min= 4450, max= 4930, avg=4690.00, stdev=339.41, samples=2 00:09:51.694 lat (usec) : 750=0.01% 00:09:51.694 lat (msec) : 10=2.09%, 20=96.30%, 50=1.60% 00:09:51.694 cpu : usr=4.50%, sys=9.99%, ctx=481, majf=0, minf=1 00:09:51.694 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:51.694 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.694 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.694 issued rwts: total=4608,4818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.694 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.694 job3: (groupid=0, jobs=1): err= 0: pid=1109676: Wed Jul 24 19:39:08 2024 00:09:51.694 read: IOPS=3144, BW=12.3MiB/s (12.9MB/s)(12.9MiB/1051msec) 00:09:51.694 slat (usec): min=2, max=23544, avg=158.40, stdev=1125.85 00:09:51.694 clat (msec): min=4, max=139, avg=21.39, stdev=21.13 00:09:51.694 lat (msec): min=4, max=139, avg=21.55, stdev=21.25 00:09:51.694 clat percentiles (msec): 00:09:51.694 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 12], 20.00th=[ 13], 00:09:51.694 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 17], 00:09:51.694 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 27], 95.00th=[ 79], 00:09:51.694 | 99.00th=[ 122], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:09:51.694 | 99.99th=[ 140] 00:09:51.694 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec); 0 zone resets 00:09:51.694 slat (usec): min=3, max=28406, avg=121.87, stdev=873.69 00:09:51.694 clat (usec): min=2633, max=51784, avg=17409.67, stdev=7052.43 00:09:51.694 lat (usec): min=2646, max=51819, avg=17531.54, stdev=7096.98 00:09:51.694 clat percentiles (usec): 00:09:51.694 | 1.00th=[ 3654], 5.00th=[ 7570], 10.00th=[10290], 20.00th=[12649], 00:09:51.695 | 30.00th=[14091], 40.00th=[14877], 50.00th=[15533], 60.00th=[17695], 00:09:51.695 | 70.00th=[20841], 80.00th=[22938], 90.00th=[24249], 95.00th=[29754], 00:09:51.695 | 99.00th=[44303], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:09:51.695 | 99.99th=[51643] 00:09:51.695 bw ( KiB/s): min=12528, max=16144, per=21.38%, avg=14336.00, stdev=2556.90, samples=2 00:09:51.695 iops : min= 3132, max= 4036, avg=3584.00, stdev=639.22, samples=2 00:09:51.695 lat (msec) : 4=0.52%, 10=7.33%, 20=62.26%, 50=26.52%, 100=2.13% 00:09:51.695 lat (msec) : 250=1.23% 00:09:51.695 cpu : usr=2.86%, sys=4.76%, ctx=381, majf=0, minf=1 00:09:51.695 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:51.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:51.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:51.695 issued rwts: total=3305,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:51.695 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:51.695 00:09:51.695 Run status group 0 (all jobs): 00:09:51.695 READ: bw=63.0MiB/s (66.1MB/s), 12.3MiB/s-19.8MiB/s (12.9MB/s-20.8MB/s), io=66.3MiB (69.5MB), run=1002-1051msec 00:09:51.695 WRITE: bw=65.5MiB/s (68.7MB/s), 13.3MiB/s-19.9MiB/s (14.0MB/s-20.8MB/s), io=68.8MiB (72.2MB), run=1002-1051msec 00:09:51.695 00:09:51.695 Disk stats (read/write): 00:09:51.695 nvme0n1: ios=3113/3455, merge=0/0, ticks=46755/56066, in_queue=102821, util=86.07% 00:09:51.695 nvme0n2: ios=4140/4287, merge=0/0, ticks=41907/44026, in_queue=85933, util=98.68% 00:09:51.695 nvme0n3: ios=3807/4096, merge=0/0, ticks=17689/17492, in_queue=35181, util=97.49% 00:09:51.695 nvme0n4: ios=3119/3432, merge=0/0, ticks=34755/45840, in_queue=80595, util=98.63% 00:09:51.695 19:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:51.695 19:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1110080 00:09:51.695 19:39:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:51.695 19:39:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:51.695 [global] 00:09:51.695 thread=1 00:09:51.695 invalidate=1 00:09:51.695 rw=read 00:09:51.695 time_based=1 00:09:51.695 runtime=10 00:09:51.695 ioengine=libaio 00:09:51.695 direct=1 00:09:51.695 bs=4096 00:09:51.695 iodepth=1 00:09:51.695 norandommap=1 00:09:51.695 numjobs=1 00:09:51.695 00:09:51.695 [job0] 00:09:51.695 filename=/dev/nvme0n1 00:09:51.695 [job1] 00:09:51.695 filename=/dev/nvme0n2 00:09:51.695 [job2] 00:09:51.695 filename=/dev/nvme0n3 00:09:51.695 [job3] 00:09:51.695 filename=/dev/nvme0n4 00:09:51.695 Could not set queue depth (nvme0n1) 00:09:51.695 Could not set queue depth (nvme0n2) 00:09:51.695 Could not set queue depth (nvme0n3) 00:09:51.695 Could not set queue depth (nvme0n4) 00:09:51.695 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.695 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.695 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.695 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:51.695 fio-3.35 00:09:51.695 Starting 4 threads 00:09:54.971 19:39:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:54.971 19:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:54.971 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3031040, buflen=4096 00:09:54.971 fio: pid=1110250, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:55.228 19:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.228 19:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:55.228 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5152768, buflen=4096 00:09:55.229 fio: pid=1110249, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:55.486 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=43835392, buflen=4096 00:09:55.486 fio: pid=1110235, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:55.486 19:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.486 19:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:55.744 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10227712, buflen=4096 00:09:55.744 fio: pid=1110239, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:55.744 19:39:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:55.744 19:39:12 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:55.744 00:09:55.744 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1110235: Wed Jul 24 19:39:13 2024 00:09:55.744 read: IOPS=3106, BW=12.1MiB/s (12.7MB/s)(41.8MiB/3445msec) 00:09:55.744 slat (usec): min=4, max=32680, avg=16.06, stdev=342.96 00:09:55.744 clat (usec): min=226, max=41144, avg=300.76, stdev=682.17 00:09:55.744 lat (usec): min=232, max=41151, avg=316.83, stdev=763.74 00:09:55.744 clat percentiles (usec): 00:09:55.744 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 253], 00:09:55.744 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 293], 00:09:55.744 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 334], 95.00th=[ 351], 00:09:55.744 | 99.00th=[ 408], 99.50th=[ 449], 99.90th=[ 603], 99.95th=[ 2073], 00:09:55.744 | 99.99th=[41157] 00:09:55.744 bw ( KiB/s): min=12168, max=14056, per=79.05%, avg=12938.67, stdev=770.87, samples=6 00:09:55.744 iops : min= 3042, max= 3514, avg=3234.67, stdev=192.72, samples=6 00:09:55.744 lat (usec) : 250=15.85%, 500=83.81%, 750=0.26% 00:09:55.744 lat (msec) : 2=0.02%, 4=0.03%, 50=0.03% 00:09:55.744 cpu : usr=2.29%, sys=4.85%, ctx=10709, majf=0, minf=1 00:09:55.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 issued rwts: total=10703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.744 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1110239: Wed Jul 24 19:39:13 2024 00:09:55.744 read: IOPS=672, BW=2689KiB/s (2754kB/s)(9988KiB/3714msec) 00:09:55.744 slat (usec): min=4, max=22931, avg=45.08, stdev=808.10 00:09:55.744 clat (usec): min=215, max=42172, avg=1436.38, stdev=6770.66 00:09:55.744 lat (usec): min=220, max=42196, avg=1481.47, stdev=6814.77 00:09:55.744 clat percentiles (usec): 00:09:55.744 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 243], 00:09:55.744 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 269], 60.00th=[ 281], 00:09:55.744 | 70.00th=[ 297], 80.00th=[ 310], 90.00th=[ 326], 95.00th=[ 404], 00:09:55.744 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:55.744 | 99.99th=[42206] 00:09:55.744 bw ( KiB/s): min= 104, max=11197, per=12.40%, avg=2029.29, stdev=4094.51, samples=7 00:09:55.744 iops : min= 26, max= 2799, avg=507.29, stdev=1023.53, samples=7 00:09:55.744 lat (usec) : 250=30.02%, 500=66.69%, 750=0.28%, 1000=0.04% 00:09:55.744 lat (msec) : 2=0.04%, 10=0.04%, 50=2.84% 00:09:55.744 cpu : usr=0.30%, sys=0.73%, ctx=2506, majf=0, minf=1 00:09:55.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 issued rwts: total=2498,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.744 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1110249: Wed Jul 24 19:39:13 2024 00:09:55.744 read: IOPS=396, BW=1583KiB/s (1621kB/s)(5032KiB/3179msec) 
00:09:55.744 slat (nsec): min=4552, max=38762, avg=10713.63, stdev=5037.91 00:09:55.744 clat (usec): min=242, max=42040, avg=2495.77, stdev=9160.69 00:09:55.744 lat (usec): min=251, max=42057, avg=2506.48, stdev=9162.25 00:09:55.744 clat percentiles (usec): 00:09:55.744 | 1.00th=[ 251], 5.00th=[ 258], 10.00th=[ 260], 20.00th=[ 269], 00:09:55.744 | 30.00th=[ 277], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 314], 00:09:55.744 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[41157], 00:09:55.744 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:55.744 | 99.99th=[42206] 00:09:55.744 bw ( KiB/s): min= 96, max= 3992, per=6.66%, avg=1090.67, stdev=1600.69, samples=6 00:09:55.744 iops : min= 24, max= 998, avg=272.67, stdev=400.17, samples=6 00:09:55.744 lat (usec) : 250=0.95%, 500=93.33%, 1000=0.16% 00:09:55.744 lat (msec) : 4=0.08%, 20=0.08%, 50=5.32% 00:09:55.744 cpu : usr=0.13%, sys=0.53%, ctx=1259, majf=0, minf=1 00:09:55.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 issued rwts: total=1259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.744 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1110250: Wed Jul 24 19:39:13 2024 00:09:55.744 read: IOPS=253, BW=1013KiB/s (1038kB/s)(2960KiB/2921msec) 00:09:55.744 slat (nsec): min=5303, max=35220, avg=8642.80, stdev=5111.95 00:09:55.744 clat (usec): min=260, max=42151, avg=3906.36, stdev=11568.61 00:09:55.744 lat (usec): min=266, max=42165, avg=3914.99, stdev=11571.09 00:09:55.744 clat percentiles (usec): 00:09:55.744 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:09:55.744 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 322], 00:09:55.744 | 70.00th=[ 334], 80.00th=[ 355], 90.00th=[ 412], 95.00th=[41157], 00:09:55.744 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:55.744 | 99.99th=[42206] 00:09:55.744 bw ( KiB/s): min= 128, max= 4976, per=7.14%, avg=1168.00, stdev=2130.73, samples=5 00:09:55.744 iops : min= 32, max= 1244, avg=292.00, stdev=532.68, samples=5 00:09:55.744 lat (usec) : 500=90.96%, 1000=0.13% 00:09:55.744 lat (msec) : 50=8.77% 00:09:55.744 cpu : usr=0.27%, sys=0.21%, ctx=742, majf=0, minf=1 00:09:55.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:55.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:55.744 issued rwts: total=741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:55.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:55.744 00:09:55.744 Run status group 0 (all jobs): 00:09:55.744 READ: bw=16.0MiB/s (16.8MB/s), 1013KiB/s-12.1MiB/s (1038kB/s-12.7MB/s), io=59.4MiB (62.2MB), run=2921-3714msec 00:09:55.744 00:09:55.744 Disk stats (read/write): 00:09:55.744 nvme0n1: ios=10409/0, merge=0/0, ticks=3252/0, in_queue=3252, util=98.74% 00:09:55.744 nvme0n2: ios=1993/0, merge=0/0, ticks=4525/0, in_queue=4525, util=97.64% 00:09:55.744 nvme0n3: ios=1002/0, merge=0/0, ticks=3044/0, in_queue=3044, util=96.66% 00:09:55.744 nvme0n4: ios=738/0, merge=0/0, ticks=2808/0, in_queue=2808, util=96.67% 00:09:56.002 19:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.002 19:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:56.258 19:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.258 19:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:56.514 19:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.514 19:39:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:56.772 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:56.772 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1110080 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # local i=0 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:09:57.030 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.288 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:09:57.288 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.288 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1232 -- # return 0 00:09:57.288 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:57.288 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:57.288 nvmf hotplug test: fio failed as expected 00:09:57.288 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:57.544 rmmod nvme_tcp 00:09:57.544 rmmod nvme_fabrics 00:09:57.544 rmmod nvme_keyring 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:57.544 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # '[' -n 1107504 ']' 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # killprocess 1107504 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' -z 1107504 ']' 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # kill -0 1107504 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # uname 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1107504 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1107504' 00:09:57.545 killing process with pid 1107504 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # kill 1107504 00:09:57.545 19:39:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@975 -- # wait 1107504 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:57.801 19:39:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:09:59.761 00:09:59.761 real 0m24.148s 00:09:59.761 user 1m24.800s 00:09:59.761 sys 0m6.668s 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # xtrace_disable 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.761 ************************************ 00:09:59.761 END TEST nvmf_fio_target 00:09:59.761 ************************************ 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1108 -- # xtrace_disable 00:09:59.761 19:39:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.018 ************************************ 00:10:00.019 START TEST nvmf_bdevio 00:10:00.019 ************************************ 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:00.019 * Looking for test storage... 00:10:00.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.019 19:39:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@452 -- # prepare_net_devs 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # local -g is_hw=no 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # xtrace_disable 00:10:00.019 19:39:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # pci_devs=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -a pci_devs 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # pci_net_devs=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # pci_drivers=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -A pci_drivers 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # net_devs=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # local -ga net_devs 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # e810=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # local -ga e810 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # x722=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # local -ga x722 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # mlx=() 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # local -ga mlx 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:10:01.921 19:39:19 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:01.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:01.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:01.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@393 -- # 
for net_dev in "${!pci_net_devs[@]}" 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:01.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # is_hw=yes 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.921 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
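The namespace plumbing traced above gives the target and the initiator separate network stacks on a single host. A minimal sketch of the same topology done by hand, assuming this run's cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing (both taken from the log; other NICs and subnets work the same way):

# Target-side port moves into its own namespace; the initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator address in the root namespace, target address inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# Bring the links (and the namespaced loopback) up.
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the default NVMe/TCP port toward the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

A ping in each direction, as the log does next, confirms the path before any NVMe traffic is attempted.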
00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:10:02.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:10:02.179 00:10:02.179 --- 10.0.0.2 ping statistics --- 00:10:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.179 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:10:02.179 00:10:02.179 --- 10.0.0.1 ping statistics --- 00:10:02.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.179 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # return 0 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@725 -- # xtrace_disable 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@485 -- # nvmfpid=1112889 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@486 -- # waitforlisten 1112889 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@832 -- # '[' -z 1112889 ']' 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
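nvmfappstart runs the target inside that namespace before any RPCs are issued. A minimal sketch of the equivalent manual launch, assuming the default /var/tmp/spdk.sock RPC socket; the polling loop is a hypothetical stand-in for the harness's waitforlisten helper:

# Launch the target in the test namespace (command copied from the log above).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
tgt_pid=$!

# Wait until the RPC socket answers before configuring the target.
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do
    sleep 0.5
done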
00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:02.179 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.179 [2024-07-24 19:39:19.489336] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:10:02.179 [2024-07-24 19:39:19.489414] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.179 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.179 [2024-07-24 19:39:19.554276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.437 [2024-07-24 19:39:19.670996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.437 [2024-07-24 19:39:19.671053] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.437 [2024-07-24 19:39:19.671070] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.437 [2024-07-24 19:39:19.671083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.437 [2024-07-24 19:39:19.671094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.437 [2024-07-24 19:39:19.671205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:02.437 [2024-07-24 19:39:19.671303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:02.437 [2024-07-24 19:39:19.671358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:02.437 [2024-07-24 19:39:19.671362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.437 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:02.437 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@865 -- # return 0 00:10:02.437 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:02.437 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@731 -- # xtrace_disable 00:10:02.437 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.694 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.694 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 [2024-07-24 19:39:19.830799] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 Malloc0 00:10:02.695 
19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:02.695 [2024-07-24 19:39:19.884482] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@536 -- # config=() 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@536 -- # local subsystem config 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:10:02.695 { 00:10:02.695 "params": { 00:10:02.695 "name": "Nvme$subsystem", 00:10:02.695 "trtype": "$TEST_TRANSPORT", 00:10:02.695 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:02.695 "adrfam": "ipv4", 00:10:02.695 "trsvcid": "$NVMF_PORT", 00:10:02.695 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:02.695 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:02.695 "hdgst": ${hdgst:-false}, 00:10:02.695 "ddgst": ${ddgst:-false} 00:10:02.695 }, 00:10:02.695 "method": "bdev_nvme_attach_controller" 00:10:02.695 } 00:10:02.695 EOF 00:10:02.695 )") 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # cat 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # jq . 
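gen_nvmf_target_json expands that heredoc once per subsystem and joins the result through jq into the config bdevio reads from /dev/fd/62; the expanded bdev_nvme_attach_controller fragment is printed just below. A standalone equivalent written to a file, with the parameter values copied from this run; the subsystems/bdev wrapper is an assumption about the helper's output, not shown verbatim in the log:

cat > /tmp/bdevio_nvme.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /tmp/bdevio_nvme.json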
00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@561 -- # IFS=, 00:10:02.695 19:39:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:10:02.695 "params": { 00:10:02.695 "name": "Nvme1", 00:10:02.695 "trtype": "tcp", 00:10:02.695 "traddr": "10.0.0.2", 00:10:02.695 "adrfam": "ipv4", 00:10:02.695 "trsvcid": "4420", 00:10:02.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:02.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:02.695 "hdgst": false, 00:10:02.695 "ddgst": false 00:10:02.695 }, 00:10:02.695 "method": "bdev_nvme_attach_controller" 00:10:02.695 }' 00:10:02.695 [2024-07-24 19:39:19.932455] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:10:02.695 [2024-07-24 19:39:19.932539] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113034 ] 00:10:02.695 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.695 [2024-07-24 19:39:19.992322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:02.953 [2024-07-24 19:39:20.110172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.953 [2024-07-24 19:39:20.110218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.953 [2024-07-24 19:39:20.110222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.211 I/O targets: 00:10:03.211 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:03.211 00:10:03.211 00:10:03.211 CUnit - A unit testing framework for C - Version 2.1-3 00:10:03.211 http://cunit.sourceforge.net/ 00:10:03.211 00:10:03.211 00:10:03.211 Suite: bdevio tests on: Nvme1n1 00:10:03.211 Test: blockdev write read block ...passed 00:10:03.211 Test: blockdev write zeroes read block ...passed 00:10:03.211 Test: blockdev write zeroes read no split ...passed 00:10:03.211 Test: blockdev write zeroes read split ...passed 00:10:03.468 Test: blockdev write zeroes read split partial ...passed 00:10:03.468 Test: blockdev reset ...[2024-07-24 19:39:20.616130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:03.468 [2024-07-24 19:39:20.616237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14f5580 (9): Bad file descriptor 00:10:03.469 [2024-07-24 19:39:20.636704] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:03.469 passed 00:10:03.469 Test: blockdev write read 8 blocks ...passed 00:10:03.469 Test: blockdev write read size > 128k ...passed 00:10:03.469 Test: blockdev write read invalid size ...passed 00:10:03.469 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:03.469 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:03.469 Test: blockdev write read max offset ...passed 00:10:03.469 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:03.469 Test: blockdev writev readv 8 blocks ...passed 00:10:03.469 Test: blockdev writev readv 30 x 1block ...passed 00:10:03.469 Test: blockdev writev readv block ...passed 00:10:03.469 Test: blockdev writev readv size > 128k ...passed 00:10:03.469 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:03.469 Test: blockdev comparev and writev ...[2024-07-24 19:39:20.809671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.809706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.809730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.809747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.810067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.810092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.810114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.810129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.810448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.810472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.810494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.810510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.810827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.810859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:03.469 [2024-07-24 19:39:20.810880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:03.469 [2024-07-24 19:39:20.810896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:03.727 passed 00:10:03.727 Test: blockdev nvme passthru rw ...passed 00:10:03.727 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:39:20.893548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.727 [2024-07-24 19:39:20.893575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:03.727 [2024-07-24 19:39:20.893743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.727 [2024-07-24 19:39:20.893766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:03.727 [2024-07-24 19:39:20.893927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.727 [2024-07-24 19:39:20.893949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:03.727 [2024-07-24 19:39:20.894113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:03.727 [2024-07-24 19:39:20.894136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:03.727 passed 00:10:03.727 Test: blockdev nvme admin passthru ...passed 00:10:03.727 Test: blockdev copy ...passed 00:10:03.727 00:10:03.727 Run Summary: Type Total Ran Passed Failed Inactive 00:10:03.727 suites 1 1 n/a 0 0 00:10:03.727 tests 23 23 23 0 0 00:10:03.727 asserts 152 152 152 0 n/a 00:10:03.727 00:10:03.727 Elapsed time = 0.981 seconds 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:03.984 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:03.985 rmmod nvme_tcp 00:10:03.985 rmmod nvme_fabrics 00:10:03.985 rmmod nvme_keyring 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
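[editor's note] The modprobe -v -r trace above is the shape of nvmfcleanup: unloading nvme-tcp can fail while a controller still holds a reference, so the helper syncs, drops set -e, retries up to 20 times, and restores set -e before returning 0. A condensed sketch of that pattern, not the verbatim helper from nvmf/common.sh:

# Tolerant unload of the kernel nvme modules, as traced above.
sync
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # illustrative back-off; the retry cadence is not visible in the trace
done
set -e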
00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # '[' -n 1112889 ']' 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # killprocess 1112889 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' -z 1112889 ']' 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # kill -0 1112889 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # uname 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1112889 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # process_name=reactor_3 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@961 -- # '[' reactor_3 = sudo ']' 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1112889' 00:10:03.985 killing process with pid 1112889 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # kill 1112889 00:10:03.985 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@975 -- # wait 1112889 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.243 19:39:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:10:06.777 00:10:06.777 real 0m6.467s 00:10:06.777 user 0m10.671s 00:10:06.777 sys 0m2.065s 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:06.777 ************************************ 00:10:06.777 END TEST nvmf_bdevio 00:10:06.777 ************************************ 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:06.777 00:10:06.777 real 3m54.016s 00:10:06.777 user 10m13.504s 00:10:06.777 sys 1m8.641s 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.777 ************************************ 00:10:06.777 END TEST nvmf_target_core 00:10:06.777 ************************************ 00:10:06.777 19:39:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.777 19:39:23 nvmf_tcp -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:10:06.777 19:39:23 nvmf_tcp -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:06.777 19:39:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:06.777 ************************************ 00:10:06.777 START TEST nvmf_target_extra 00:10:06.777 ************************************ 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:06.777 * Looking for test storage... 00:10:06.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 
]] 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:06.777 19:39:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:06.777 ************************************ 00:10:06.777 START TEST nvmf_example 00:10:06.777 ************************************ 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:06.778 * Looking for test storage... 00:10:06.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.778 
19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:06.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@725 -- # xtrace_disable 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@452 -- # prepare_net_devs 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # local -g is_hw=no 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # xtrace_disable 00:10:06.778 19:39:23 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@295 -- # pci_devs=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -a pci_devs 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # pci_net_devs=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # pci_drivers=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -A pci_drivers 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # net_devs=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # local -ga net_devs 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # e810=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # local -ga e810 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # x722=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # local -ga x722 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # mlx=() 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # local -ga mlx 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:08.679 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:08.679 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:08.679 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:08.679 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # is_hw=yes 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.679 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.680 19:39:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:10:08.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:10:08.680 00:10:08.680 --- 10.0.0.2 ping statistics --- 00:10:08.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.680 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:08.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:10:08.680 00:10:08.680 --- 10.0.0.1 ping statistics --- 00:10:08.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.680 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # return 0 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@725 -- # xtrace_disable 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1115159 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1115159 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@832 -- # '[' -z 1115159 ']' 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:08.680 19:39:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.938 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@865 -- # return 0 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@731 -- # xtrace_disable 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:09.869 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:09.870 
19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:09.870 19:39:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:09.870 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.060 Initializing NVMe Controllers 00:10:22.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:22.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:22.060 Initialization complete. Launching workers. 00:10:22.060 ======================================================== 00:10:22.060 Latency(us) 00:10:22.060 Device Information : IOPS MiB/s Average min max 00:10:22.060 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14791.50 57.78 4328.30 868.62 16301.39 00:10:22.060 ======================================================== 00:10:22.060 Total : 14791.50 57.78 4328.30 868.62 16301.39 00:10:22.060 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.060 rmmod nvme_tcp 00:10:22.060 rmmod nvme_fabrics 00:10:22.060 rmmod nvme_keyring 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # '[' -n 1115159 ']' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # killprocess 1115159 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@951 -- # '[' -z 1115159 ']' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # kill -0 1115159 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@956 -- # uname 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1115159 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # process_name=nvmf 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@961 -- # '[' nvmf = sudo ']' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1115159' 00:10:22.060 killing process with pid 1115159 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # kill 1115159 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@975 -- # wait 1115159 00:10:22.060 nvmf threads initialize successfully 00:10:22.060 bdev subsystem init successfully 00:10:22.060 created a nvmf target service 00:10:22.060 create targets's poll groups done 00:10:22.060 all subsystems of target started 00:10:22.060 nvmf target is running 00:10:22.060 all subsystems of target stopped 00:10:22.060 destroy targets's poll groups done 00:10:22.060 destroyed the nvmf target service 00:10:22.060 bdev subsystem finish successfully 00:10:22.060 nvmf threads destroy successfully 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.060 19:39:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.629 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:10:22.629 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:22.629 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@731 -- # xtrace_disable 00:10:22.629 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.629 00:10:22.629 real 0m16.049s 00:10:22.629 user 0m45.681s 00:10:22.629 sys 0m3.196s 00:10:22.629 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:22.629 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:22.630 ************************************ 00:10:22.630 END TEST nvmf_example 00:10:22.630 ************************************ 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:22.630 19:39:39 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:22.630 ************************************ 00:10:22.630 START TEST nvmf_filesystem 00:10:22.630 ************************************ 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:22.630 * Looking for test storage... 00:10:22.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:22.630 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:22.631 
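The applications.sh lines traced above derive the repository root from the script's own location and then define one array per SPDK application. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the source file (the second readlink step is an assumption about how _root is rebased from test/common to the repo root):

    # reconstruction of the applications.sh pattern seen in the trace above
    _root=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # .../spdk/test/common
    _root=$(readlink -f "$_root/../..")                     # .../spdk (repo root)
    _app_dir=$_root/build/bin
    NVMF_APP=("$_app_dir/nvmf_tgt")                         # arrays, so callers can append flags
    SPDK_APP=("$_app_dir/spdk_tgt")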
19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:22.631 #define SPDK_CONFIG_H 00:10:22.631 #define SPDK_CONFIG_APPS 1 00:10:22.631 #define SPDK_CONFIG_ARCH native 00:10:22.631 #undef SPDK_CONFIG_ASAN 00:10:22.631 #undef SPDK_CONFIG_AVAHI 00:10:22.631 #undef SPDK_CONFIG_CET 00:10:22.631 #define SPDK_CONFIG_COVERAGE 1 00:10:22.631 #define SPDK_CONFIG_CROSS_PREFIX 00:10:22.631 #undef SPDK_CONFIG_CRYPTO 00:10:22.631 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:22.631 #undef SPDK_CONFIG_CUSTOMOCF 00:10:22.631 #undef SPDK_CONFIG_DAOS 00:10:22.631 #define SPDK_CONFIG_DAOS_DIR 00:10:22.631 #define SPDK_CONFIG_DEBUG 1 00:10:22.631 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:22.631 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:22.631 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:22.631 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:22.631 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:22.631 #undef SPDK_CONFIG_DPDK_UADK 00:10:22.631 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:22.631 #define SPDK_CONFIG_EXAMPLES 1 00:10:22.631 #undef SPDK_CONFIG_FC 00:10:22.631 #define SPDK_CONFIG_FC_PATH 00:10:22.631 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:22.631 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:22.631 #undef SPDK_CONFIG_FUSE 00:10:22.631 #undef SPDK_CONFIG_FUZZER 00:10:22.631 #define SPDK_CONFIG_FUZZER_LIB 00:10:22.631 #undef SPDK_CONFIG_GOLANG 00:10:22.631 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:22.631 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:22.631 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:22.631 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:22.631 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:22.631 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:22.631 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:22.631 #define SPDK_CONFIG_IDXD 1 00:10:22.631 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:22.631 #undef SPDK_CONFIG_IPSEC_MB 00:10:22.631 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:22.631 #define SPDK_CONFIG_ISAL 1 00:10:22.631 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:22.631 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:22.631 #define SPDK_CONFIG_LIBDIR 00:10:22.631 #undef SPDK_CONFIG_LTO 00:10:22.631 #define SPDK_CONFIG_MAX_LCORES 128 00:10:22.631 #define SPDK_CONFIG_NVME_CUSE 1 00:10:22.631 #undef SPDK_CONFIG_OCF 00:10:22.631 #define SPDK_CONFIG_OCF_PATH 00:10:22.631 #define SPDK_CONFIG_OPENSSL_PATH 00:10:22.631 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:22.631 #define SPDK_CONFIG_PGO_DIR 00:10:22.631 #undef SPDK_CONFIG_PGO_USE 00:10:22.631 #define SPDK_CONFIG_PREFIX /usr/local 00:10:22.631 #undef SPDK_CONFIG_RAID5F 00:10:22.631 #undef SPDK_CONFIG_RBD 00:10:22.631 #define SPDK_CONFIG_RDMA 1 00:10:22.631 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:22.631 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:22.631 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:22.631 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:22.631 #define SPDK_CONFIG_SHARED 1 00:10:22.631 #undef SPDK_CONFIG_SMA 00:10:22.631 #define SPDK_CONFIG_TESTS 1 00:10:22.631 #undef SPDK_CONFIG_TSAN 00:10:22.631 #define SPDK_CONFIG_UBLK 1 00:10:22.631 #define SPDK_CONFIG_UBSAN 1 00:10:22.631 #undef SPDK_CONFIG_UNIT_TESTS 00:10:22.631 #undef SPDK_CONFIG_URING 00:10:22.631 #define SPDK_CONFIG_URING_PATH 00:10:22.631 #undef SPDK_CONFIG_URING_ZNS 00:10:22.631 
#undef SPDK_CONFIG_USDT 00:10:22.631 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:22.631 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:22.631 #define SPDK_CONFIG_VFIO_USER 1 00:10:22.631 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:22.631 #define SPDK_CONFIG_VHOST 1 00:10:22.631 #define SPDK_CONFIG_VIRTIO 1 00:10:22.631 #undef SPDK_CONFIG_VTUNE 00:10:22.631 #define SPDK_CONFIG_VTUNE_DIR 00:10:22.631 #define SPDK_CONFIG_WERROR 1 00:10:22.631 #define SPDK_CONFIG_WPDK_DIR 00:10:22.631 #undef SPDK_CONFIG_XNVME 00:10:22.631 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:22.631 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:22.632 19:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
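The long run of paired `: 0` (or `: 1`) and `export SPDK_TEST_*` entries through this stretch of the trace is bash's default-assignment idiom: each flag keeps whatever value autorun-spdk.conf injected earlier and falls back to a default otherwise. A minimal sketch of one such pair (flag name taken from the trace; the `:=` expansion form is an assumption consistent with the bare `: 1` commands xtrace prints):

    : "${SPDK_TEST_NVME_CLI:=0}"   # ':' is a no-op builtin; it only forces the default-expansion
    export SPDK_TEST_NVME_CLI      # traces as ': 1' here because autorun-spdk.conf already set it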
00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:22.632 19:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:22.632 19:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:22.632 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:22.633 
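Both PATH (earlier in this trace) and the LD_LIBRARY_PATH export that follows visibly accumulate the same segments several times over, because export.sh and autotest_common.sh prepend unconditionally each time they are re-sourced; the duplicates are harmless for lookup but inflate the log. A hedged sketch of an idempotent prepend (prepend_path is a hypothetical helper, not part of SPDK):

    prepend_path() {
        # add $2 to the front of the variable named by $1 only if it is absent
        local cur=${!1}
        case ":$cur:" in *":$2:"*) ;; *) printf -v "$1" '%s' "$2${cur:+:$cur}" ;; esac
        export "$1"
    }
    prepend_path PATH /opt/go/1.21.1/bin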
19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:22.633 19:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:22.633 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:22.634 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:22.634 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@321 -- # [[ -z 1116862 ]] 00:10:22.634 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@321 -- # kill -0 1116862 00:10:22.634 19:39:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # set_test_storage 2147483648 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -v testdir ]] 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local requested_size=2147483648 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local mount target_dir 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local -A mounts fss sizes avails uses 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@337 -- # local source fs size avail mount use 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # local storage_fallback storage_candidates 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # mktemp -udt spdk.XXXXXX 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # storage_fallback=/tmp/spdk.pGMFpO 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@348 -- # [[ -n '' ]] 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@353 -- # [[ -n '' ]] 00:10:22.634 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.pGMFpO/tests/target /tmp/spdk.pGMFpO 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # requested_size=2214592512 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # df -T 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # grep -v Filesystem 00:10:22.893 19:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=spdk_devtmpfs 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=devtmpfs 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=67108864 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=67108864 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=0 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=/dev/pmem0 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=ext2 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=953643008 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=5284429824 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=4330786816 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=spdk_root 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=overlay 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=55516614656 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=61994729472 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=6478114816 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=tmpfs 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=tmpfs 00:10:22.893 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=30987444224 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=30997364736 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=9920512 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=tmpfs 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=tmpfs 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=12376539136 00:10:22.894 19:39:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=12398948352 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=22409216 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=tmpfs 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=tmpfs 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=30996762624 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=30997364736 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=602112 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # mounts["$mount"]=tmpfs 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # fss["$mount"]=tmpfs 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # avails["$mount"]=6199468032 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # sizes["$mount"]=6199472128 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # uses["$mount"]=4096 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # read -r source fs size use avail _ mount 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # printf '* Looking for test storage...\n' 00:10:22.894 * Looking for test storage... 
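The set_test_storage comparison traced next reduces to simple arithmetic over the df output parsed above. With the numbers from this run (the 67108864-byte bump on top of the requested 2147483648 looks like a 64 MiB safety margin; that reading is an inference, not something the trace states):

    requested_size=$((2147483648 + 67108864))   # 2214592512, matching the trace
    target_space=55516614656                    # avail on the overlay root, from df
    (( target_space >= requested_size ))        # true: the test dir itself is usable
    new_size=$((requested_size + 6478114816))   # requested + space already used = 8692707328
    (( new_size * 100 / 61994729472 > 95 )) \
        && echo "would nearly fill /" || echo "fits (~14% of /)"

so the run keeps /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target as its storage directory instead of falling back to /tmp/spdk.pGMFpO.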
00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # local target_space new_size 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # for target_dir in "${storage_candidates[@]}" 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # mount=/ 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # target_space=55516614656 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space == 0 || target_space < requested_size )) 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( target_space >= requested_size )) 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # [[ overlay == tmpfs ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # [[ overlay == ramfs ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # [[ / == / ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # new_size=8692707328 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # return 0 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # set -o errtrace 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # shopt -s extdebug 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1688 -- # true 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # xtrace_fd 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:22.894 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:22.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # xtrace_disable 00:10:22.895 19:39:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # pci_devs=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -a pci_devs 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # pci_net_devs=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # pci_drivers=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -A pci_drivers 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # net_devs=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # local 
-ga net_devs 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # e810=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # local -ga e810 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # x722=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # local -ga x722 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # mlx=() 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # local -ga mlx 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:24.795 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:24.795 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:24.795 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:24.795 Found net devices under 0000:0a:00.1: cvl_0_1 
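The two ice ports discovered above, cvl_0_0 and cvl_0_1, are wired into a back-to-back NVMe/TCP test topology by the nvmf_tcp_init steps that follow: one port moves into a private network namespace and carries the target address, while the other stays in the root namespace as the initiator. Condensed from the commands recorded below (namespace, interface names, and addresses as in this run):

    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # start clean
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                       # initiator reaches target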
00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # is_hw=yes 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:10:24.795 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:10:25.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:25.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:10:25.053 00:10:25.053 --- 10.0.0.2 ping statistics --- 00:10:25.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.053 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:25.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:25.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:10:25.053 00:10:25.053 --- 10.0.0.1 ping statistics --- 00:10:25.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.053 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # return 0 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:25.053 ************************************ 00:10:25.053 START TEST nvmf_filesystem_no_in_capsule 00:10:25.053 ************************************ 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # nvmf_filesystem_part 0 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@725 -- # xtrace_disable 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@485 -- # nvmfpid=1118491 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:25.053 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@486 -- # waitforlisten 1118491 00:10:25.054 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # '[' -z 1118491 ']' 00:10:25.054 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.054 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:25.054 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.054 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:25.054 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.054 [2024-07-24 19:39:42.306610] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:10:25.054 [2024-07-24 19:39:42.306684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.054 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.054 [2024-07-24 19:39:42.373269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:25.311 [2024-07-24 19:39:42.481171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.311 [2024-07-24 19:39:42.481222] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.311 [2024-07-24 19:39:42.481267] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.311 [2024-07-24 19:39:42.481279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.311 [2024-07-24 19:39:42.481289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
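nvmfappstart launches the target inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten then blocks until the RPC server answers on its UNIX domain socket. A rough sketch of that readiness loop, assuming the usual scripts/rpc.py probe; the socket path, pid, and 100-retry budget are the values shown in this log:

    rpc_addr=/var/tmp/spdk.sock
    pid=1118491
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || exit 1              # target exited early
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break                                          # RPC socket is live
        fi
        sleep 0.5
    done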
00:10:25.311 [2024-07-24 19:39:42.481342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.311 [2024-07-24 19:39:42.481399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.311 [2024-07-24 19:39:42.481466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:25.311 [2024-07-24 19:39:42.481468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@865 -- # return 0 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@731 -- # xtrace_disable 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.311 [2024-07-24 19:39:42.647809] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:25.311 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.571 Malloc1 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:25.571 19:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.571 [2024-07-24 19:39:42.834417] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_name=Malloc1 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_info 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bs 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local nb 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_info='[ 00:10:25.571 { 00:10:25.571 "name": "Malloc1", 00:10:25.571 "aliases": [ 00:10:25.571 "6923d440-ff47-43ef-b6f5-1839065065c1" 00:10:25.571 ], 00:10:25.571 "product_name": "Malloc disk", 00:10:25.571 "block_size": 512, 00:10:25.571 "num_blocks": 1048576, 00:10:25.571 "uuid": "6923d440-ff47-43ef-b6f5-1839065065c1", 00:10:25.571 "assigned_rate_limits": { 00:10:25.571 "rw_ios_per_sec": 0, 00:10:25.571 "rw_mbytes_per_sec": 0, 00:10:25.571 "r_mbytes_per_sec": 0, 00:10:25.571 "w_mbytes_per_sec": 0 00:10:25.571 }, 00:10:25.571 "claimed": true, 00:10:25.571 "claim_type": "exclusive_write", 00:10:25.571 "zoned": false, 00:10:25.571 "supported_io_types": { 00:10:25.571 "read": 
true, 00:10:25.571 "write": true, 00:10:25.571 "unmap": true, 00:10:25.571 "flush": true, 00:10:25.571 "reset": true, 00:10:25.571 "nvme_admin": false, 00:10:25.571 "nvme_io": false, 00:10:25.571 "nvme_io_md": false, 00:10:25.571 "write_zeroes": true, 00:10:25.571 "zcopy": true, 00:10:25.571 "get_zone_info": false, 00:10:25.571 "zone_management": false, 00:10:25.571 "zone_append": false, 00:10:25.571 "compare": false, 00:10:25.571 "compare_and_write": false, 00:10:25.571 "abort": true, 00:10:25.571 "seek_hole": false, 00:10:25.571 "seek_data": false, 00:10:25.571 "copy": true, 00:10:25.571 "nvme_iov_md": false 00:10:25.571 }, 00:10:25.571 "memory_domains": [ 00:10:25.571 { 00:10:25.571 "dma_device_id": "system", 00:10:25.571 "dma_device_type": 1 00:10:25.571 }, 00:10:25.571 { 00:10:25.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.571 "dma_device_type": 2 00:10:25.571 } 00:10:25.571 ], 00:10:25.571 "driver_specific": {} 00:10:25.571 } 00:10:25.571 ]' 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .block_size' 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bs=512 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .num_blocks' 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # nb=1048576 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # bdev_size=512 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # echo 512 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:25.571 19:39:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.185 19:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.185 19:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local i=0 00:10:26.185 19:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.185 19:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:10:26.185 19:39:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # sleep 2 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # return 0 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:28.709 19:39:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:28.967 19:39:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.899 ************************************ 00:10:29.899 START TEST filesystem_ext4 00:10:29.899 ************************************ 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # nvmf_filesystem_create ext4 nvme0n1 
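From here each filesystem_* subtest exercises the same cycle against the exported Malloc namespace: build the filesystem on the GPT partition created above, mount it, do a small write/delete round-trip, unmount, and verify the target process survived. Condensed from the filesystem.sh steps traced below (ext4 shown; the btrfs and xfs passes differ only in the mkfs command and its force flag):

    dev=/dev/nvme0n1p1
    mnt=/mnt/device
    mkfs.ext4 -F "$dev"          # ext4 takes -F; mkfs.btrfs/mkfs.xfs take -f
    mount "$dev" "$mnt"
    touch "$mnt/aaa"             # small I/O over the NVMe/TCP path
    sync
    rm "$mnt/aaa"
    sync
    umount "$mnt"
    kill -0 "$nvmfpid"                         # target must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1      # device and partition
    lsblk -l -o NAME | grep -q -w nvme0n1p1    # still visible to the host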
00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local fstype=ext4 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local dev_name=/dev/nvme0n1p1 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local i=0 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local force 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # '[' ext4 = ext4 ']' 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # force=-F 00:10:29.899 19:39:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@938 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:29.899 mke2fs 1.46.5 (30-Dec-2021) 00:10:30.157 Discarding device blocks: 0/522240 done 00:10:30.157 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:30.157 Filesystem UUID: ccebb890-53c5-427b-9469-7b7da98b74e0 00:10:30.157 Superblock backups stored on blocks: 00:10:30.157 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:30.157 00:10:30.157 Allocating group tables: 0/64 done 00:10:30.157 Writing inode tables: 0/64 done 00:10:33.428 Creating journal (8192 blocks): done 00:10:33.992 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:10:33.992 00:10:33.992 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@946 -- # return 0 00:10:33.992 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:34.250 
19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1118491 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:34.250 00:10:34.250 real 0m4.404s 00:10:34.250 user 0m0.021s 00:10:34.250 sys 0m0.050s 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:34.250 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:34.250 ************************************ 00:10:34.250 END TEST filesystem_ext4 00:10:34.250 ************************************ 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:34.508 ************************************ 00:10:34.508 START TEST filesystem_btrfs 00:10:34.508 ************************************ 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local fstype=btrfs 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local dev_name=/dev/nvme0n1p1 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local i=0 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local force 00:10:34.508 19:39:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # '[' btrfs = ext4 ']' 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # force=-f 00:10:34.508 19:39:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:34.765 btrfs-progs v6.6.2 00:10:34.765 See https://btrfs.readthedocs.io for more information. 00:10:34.765 00:10:34.765 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:34.765 NOTE: several default settings have changed in version 5.15, please make sure 00:10:34.765 this does not affect your deployments: 00:10:34.765 - DUP for metadata (-m dup) 00:10:34.765 - enabled no-holes (-O no-holes) 00:10:34.765 - enabled free-space-tree (-R free-space-tree) 00:10:34.765 00:10:34.765 Label: (null) 00:10:34.765 UUID: 49ceb002-fff4-4a28-bace-30e56db6a4b3 00:10:34.765 Node size: 16384 00:10:34.765 Sector size: 4096 00:10:34.765 Filesystem size: 510.00MiB 00:10:34.765 Block group profiles: 00:10:34.765 Data: single 8.00MiB 00:10:34.765 Metadata: DUP 32.00MiB 00:10:34.765 System: DUP 8.00MiB 00:10:34.765 SSD detected: yes 00:10:34.765 Zoned device: no 00:10:34.765 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:34.765 Runtime features: free-space-tree 00:10:34.765 Checksum: crc32c 00:10:34.765 Number of devices: 1 00:10:34.765 Devices: 00:10:34.765 ID SIZE PATH 00:10:34.765 1 510.00MiB /dev/nvme0n1p1 00:10:34.765 00:10:34.765 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@946 -- # return 0 00:10:34.765 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1118491 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:35.697 00:10:35.697 real 0m1.215s 00:10:35.697 user 0m0.020s 00:10:35.697 sys 0m0.110s 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:35.697 ************************************ 00:10:35.697 END TEST filesystem_btrfs 00:10:35.697 ************************************ 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.697 ************************************ 00:10:35.697 START TEST filesystem_xfs 00:10:35.697 ************************************ 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # nvmf_filesystem_create xfs nvme0n1 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local fstype=xfs 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local dev_name=/dev/nvme0n1p1 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local i=0 00:10:35.697 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local force 00:10:35.698 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # '[' xfs = ext4 ']' 00:10:35.698 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # force=-f 00:10:35.698 19:39:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:35.698 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:35.698 = sectsz=512 attr=2, projid32bit=1 00:10:35.698 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:35.698 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:35.698 data = bsize=4096 blocks=130560, imaxpct=25 00:10:35.698 = sunit=0 swidth=0 blks 00:10:35.698 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:35.698 log =internal log bsize=4096 blocks=16384, version=2 00:10:35.698 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:35.698 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:36.629 Discarding blocks...Done. 00:10:36.629 19:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@946 -- # return 0 00:10:36.629 19:39:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:38.076 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:38.076 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:38.076 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:38.076 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:38.076 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:38.076 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1118491 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:38.333 00:10:38.333 real 0m2.570s 00:10:38.333 user 0m0.017s 00:10:38.333 sys 0m0.060s 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:38.333 ************************************ 00:10:38.333 END TEST filesystem_xfs 00:10:38.333 ************************************ 00:10:38.333 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:38.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # local i=0 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1232 -- # return 0 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1118491 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' -z 1118491 ']' 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # kill -0 1118491 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # uname 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:38.590 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1118491 00:10:38.591 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:38.591 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:38.591 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1118491' 00:10:38.591 killing process with pid 1118491 00:10:38.591 19:39:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # kill 1118491 00:10:38.591 19:39:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@975 -- # wait 1118491 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:39.157 00:10:39.157 real 0m14.124s 00:10:39.157 user 0m54.086s 00:10:39.157 sys 0m2.005s 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.157 ************************************ 00:10:39.157 END TEST nvmf_filesystem_no_in_capsule 00:10:39.157 ************************************ 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:39.157 ************************************ 00:10:39.157 START TEST nvmf_filesystem_in_capsule 00:10:39.157 ************************************ 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # nvmf_filesystem_part 4096 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@725 -- # xtrace_disable 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@485 -- # nvmfpid=1120428 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@486 -- # waitforlisten 1120428 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # '[' -z 1120428 ']' 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.157 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:39.158 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
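The in_capsule suite begins by relaunching nvmf_tgt inside the target network namespace and blocking until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the default /var/tmp/spdk.sock socket (the suite's waitforlisten helper adds a bounded retry counter and richer liveness checks than shown here):

    # Launch the SPDK NVMe-oF target in the test netns: app id 0, all trace
    # groups enabled (0xFFFF), reactors on cores 0-3 (mask 0xF).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC UNIX socket until the app accepts commands, bailing out
    # early if the target process dies during startup.
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited" >&2; exit 1; }
        sleep 0.5
    done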
00:10:39.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.158 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:39.158 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.158 [2024-07-24 19:39:56.474431] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:10:39.158 [2024-07-24 19:39:56.474531] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.158 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.415 [2024-07-24 19:39:56.539452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.415 [2024-07-24 19:39:56.650777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.415 [2024-07-24 19:39:56.650835] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.415 [2024-07-24 19:39:56.650864] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.415 [2024-07-24 19:39:56.650876] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.415 [2024-07-24 19:39:56.650886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.415 [2024-07-24 19:39:56.650942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.415 [2024-07-24 19:39:56.650974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.415 [2024-07-24 19:39:56.651032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.415 [2024-07-24 19:39:56.651034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.415 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:39.415 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@865 -- # return 0 00:10:39.415 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:39.415 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@731 -- # xtrace_disable 00:10:39.415 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.673 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.673 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
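Once the target is up, the trace below provisions it over RPC: a TCP transport that allows 4 KiB of in-capsule data, a 512 MiB malloc bdev, a subsystem carrying that bdev as a namespace, and a TCP listener on the target address. The same sequence collected as direct scripts/rpc.py calls (an assumed equivalent of the suite's rpc_cmd wrapper, which talks to the same socket):

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: up to 4 KiB in-capsule data, -u 8192: IO unit size
    $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420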
00:10:39.674 [2024-07-24 19:39:56.812787] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.674 Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.674 [2024-07-24 19:39:56.987325] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_name=Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_info 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bs 00:10:39.674 19:39:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local nb 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:39.674 19:39:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.674 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:39.674 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_info='[ 00:10:39.674 { 00:10:39.674 "name": "Malloc1", 00:10:39.674 "aliases": [ 00:10:39.674 "fc3effe4-7bf2-4dba-84f6-6afbfeba7947" 00:10:39.674 ], 00:10:39.674 "product_name": "Malloc disk", 00:10:39.674 "block_size": 512, 00:10:39.674 "num_blocks": 1048576, 00:10:39.674 "uuid": "fc3effe4-7bf2-4dba-84f6-6afbfeba7947", 00:10:39.674 "assigned_rate_limits": { 00:10:39.674 "rw_ios_per_sec": 0, 00:10:39.674 "rw_mbytes_per_sec": 0, 00:10:39.674 "r_mbytes_per_sec": 0, 00:10:39.674 "w_mbytes_per_sec": 0 00:10:39.674 }, 00:10:39.674 "claimed": true, 00:10:39.674 "claim_type": "exclusive_write", 00:10:39.674 "zoned": false, 00:10:39.674 "supported_io_types": { 00:10:39.674 "read": true, 00:10:39.674 "write": true, 00:10:39.674 "unmap": true, 00:10:39.674 "flush": true, 00:10:39.674 "reset": true, 00:10:39.674 "nvme_admin": false, 00:10:39.674 "nvme_io": false, 00:10:39.674 "nvme_io_md": false, 00:10:39.674 "write_zeroes": true, 00:10:39.674 "zcopy": true, 00:10:39.674 "get_zone_info": false, 00:10:39.674 "zone_management": false, 00:10:39.674 "zone_append": false, 00:10:39.674 "compare": false, 00:10:39.674 "compare_and_write": false, 00:10:39.674 "abort": true, 00:10:39.674 "seek_hole": false, 00:10:39.674 "seek_data": false, 00:10:39.674 "copy": true, 00:10:39.674 "nvme_iov_md": false 00:10:39.674 }, 00:10:39.674 "memory_domains": [ 00:10:39.674 { 00:10:39.674 "dma_device_id": "system", 00:10:39.674 "dma_device_type": 1 00:10:39.674 }, 00:10:39.674 { 00:10:39.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.674 "dma_device_type": 2 00:10:39.674 } 00:10:39.674 ], 00:10:39.674 "driver_specific": {} 00:10:39.674 } 00:10:39.674 ]' 00:10:39.674 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .block_size' 00:10:39.674 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bs=512 00:10:39.674 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .num_blocks' 00:10:39.933 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # nb=1048576 00:10:39.933 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # bdev_size=512 00:10:39.933 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # echo 512 00:10:39.933 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:39.933 19:39:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:40.498 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:40.498 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local i=0 00:10:40.498 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:10:40.498 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:10:40.498 19:39:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # sleep 2 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # return 0 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:42.395 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:42.653 19:39:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:43.218 19:40:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:44.590 ************************************ 00:10:44.590 START TEST filesystem_in_capsule_ext4 00:10:44.590 ************************************ 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local fstype=ext4 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local dev_name=/dev/nvme0n1p1 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local i=0 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local force 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # '[' ext4 = ext4 ']' 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # force=-F 00:10:44.590 19:40:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@938 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:44.590 mke2fs 1.46.5 (30-Dec-2021) 00:10:44.590 Discarding device blocks: 0/522240 done 00:10:44.590 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:44.590 Filesystem UUID: eb3df76b-09e1-4a5f-b790-d1114a8c8ee2 00:10:44.590 Superblock backups stored on blocks: 00:10:44.590 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:10:44.590 00:10:44.590 Allocating group tables: 0/64 done 00:10:44.590 Writing inode tables: 0/64 done 00:10:45.961 Creating journal (8192 blocks): done 00:10:45.961 Writing superblocks and filesystem accounting information: 0/64 done 00:10:45.961 00:10:45.961 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@946 -- # return 0 00:10:45.961 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1120428 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:46.528 00:10:46.528 real 0m2.199s 00:10:46.528 user 0m0.019s 00:10:46.528 sys 0m0.045s 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 ************************************ 00:10:46.528 END TEST filesystem_in_capsule_ext4 00:10:46.528 ************************************ 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:46.528 19:40:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:46.528 ************************************ 00:10:46.528 START TEST filesystem_in_capsule_btrfs 00:10:46.528 ************************************ 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local fstype=btrfs 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local dev_name=/dev/nvme0n1p1 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local i=0 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local force 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # '[' btrfs = ext4 ']' 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # force=-f 00:10:46.528 19:40:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:46.785 btrfs-progs v6.6.2 00:10:46.785 See https://btrfs.readthedocs.io for more information. 00:10:46.785 00:10:46.785 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
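Each of these mkfs runs goes through the suite's make_filesystem helper, which picks the right force flag per filesystem before invoking mkfs. A simplified sketch of the dispatch visible in the trace above (the real helper in autotest_common.sh also keeps a retry counter, per the local i=0 lines):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mke2fs takes -F to force-format
        else
            force=-f          # mkfs.xfs and mkfs.btrfs take -f
        fi
        mkfs.$fstype $force "$dev_name"
    }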
00:10:46.785 NOTE: several default settings have changed in version 5.15, please make sure 00:10:46.785 this does not affect your deployments: 00:10:46.785 - DUP for metadata (-m dup) 00:10:46.785 - enabled no-holes (-O no-holes) 00:10:46.785 - enabled free-space-tree (-R free-space-tree) 00:10:46.785 00:10:46.785 Label: (null) 00:10:46.785 UUID: 4db07a31-383a-467c-8faf-a66b73f385af 00:10:46.785 Node size: 16384 00:10:46.785 Sector size: 4096 00:10:46.785 Filesystem size: 510.00MiB 00:10:46.785 Block group profiles: 00:10:46.785 Data: single 8.00MiB 00:10:46.785 Metadata: DUP 32.00MiB 00:10:46.785 System: DUP 8.00MiB 00:10:46.785 SSD detected: yes 00:10:46.785 Zoned device: no 00:10:46.785 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:46.785 Runtime features: free-space-tree 00:10:46.785 Checksum: crc32c 00:10:46.785 Number of devices: 1 00:10:46.785 Devices: 00:10:46.785 ID SIZE PATH 00:10:46.785 1 510.00MiB /dev/nvme0n1p1 00:10:46.785 00:10:46.785 19:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@946 -- # return 0 00:10:46.785 19:40:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:47.716 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:47.716 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:47.716 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:47.716 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:47.716 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:47.716 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1120428 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:47.974 00:10:47.974 real 0m1.309s 00:10:47.974 user 0m0.005s 00:10:47.974 sys 0m0.131s 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:47.974 19:40:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:47.974 ************************************ 00:10:47.974 END TEST filesystem_in_capsule_btrfs 00:10:47.974 ************************************ 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:47.974 ************************************ 00:10:47.974 START TEST filesystem_in_capsule_xfs 00:10:47.974 ************************************ 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # nvmf_filesystem_create xfs nvme0n1 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:47.974 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local fstype=xfs 00:10:47.975 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local dev_name=/dev/nvme0n1p1 00:10:47.975 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local i=0 00:10:47.975 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local force 00:10:47.975 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # '[' xfs = ext4 ']' 00:10:47.975 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # force=-f 00:10:47.975 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:47.975 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:47.975 = sectsz=512 attr=2, projid32bit=1 00:10:47.975 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:47.975 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:47.975 data = bsize=4096 blocks=130560, imaxpct=25 00:10:47.975 = sunit=0 swidth=0 blks 00:10:47.975 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:47.975 log =internal log bsize=4096 blocks=16384, version=2 00:10:47.975 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:47.975 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:10:48.906 Discarding blocks...Done. 00:10:48.906 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@946 -- # return 0 00:10:48.906 19:40:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.802 19:40:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1120428 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.802 00:10:50.802 real 0m2.908s 00:10:50.802 user 0m0.016s 00:10:50.802 sys 0m0.059s 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.802 ************************************ 00:10:50.802 END TEST filesystem_in_capsule_xfs 00:10:50.802 ************************************ 00:10:50.802 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:51.060 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:51.060 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:51.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.317 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:51.318 19:40:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # local i=0 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1232 -- # return 0 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1120428 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' -z 1120428 ']' 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # kill -0 1120428 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # uname 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1120428 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1120428' 00:10:51.318 killing process with pid 1120428 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # kill 1120428 00:10:51.318 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@975 -- # wait 1120428 00:10:51.882 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:51.882 00:10:51.882 real 0m12.571s 00:10:51.882 user 0m48.089s 
00:10:51.882 sys 0m1.830s 00:10:51.882 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:51.883 19:40:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:51.883 ************************************ 00:10:51.883 END TEST nvmf_filesystem_in_capsule 00:10:51.883 ************************************ 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.883 rmmod nvme_tcp 00:10:51.883 rmmod nvme_fabrics 00:10:51.883 rmmod nvme_keyring 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.883 19:40:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:10:53.785 00:10:53.785 real 0m31.233s 00:10:53.785 user 1m43.107s 00:10:53.785 sys 0m5.433s 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # xtrace_disable 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:53.785 ************************************ 00:10:53.785 END TEST nvmf_filesystem 00:10:53.785 ************************************ 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:10:53.785 19:40:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.043 ************************************ 00:10:54.043 START TEST nvmf_target_discovery 00:10:54.043 ************************************ 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:54.043 * Looking for test storage... 00:10:54.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.043 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.044 19:40:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.044 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # xtrace_disable 00:10:54.044 19:40:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # pci_devs=() 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -a pci_devs 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # pci_net_devs=() 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # pci_drivers=() 00:10:55.944 19:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -A pci_drivers 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # net_devs=() 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # local -ga net_devs 00:10:55.944 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # e810=() 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # local -ga e810 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # x722=() 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # local -ga x722 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # mlx=() 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # local -ga mlx 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:55.945 Found 0000:0a:00.0 (0x8086 - 
0x159b) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:55.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:55.945 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:10:55.945 19:40:13 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # [[ up == up ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:55.945 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # is_hw=yes 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.945 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:10:56.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:56.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:10:56.203 00:10:56.203 --- 10.0.0.2 ping statistics --- 00:10:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.203 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:56.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:56.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:10:56.203 00:10:56.203 --- 10.0.0.1 ping statistics --- 00:10:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:56.203 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # return 0 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@725 -- # xtrace_disable 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@485 -- # nvmfpid=1124029 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@486 -- # waitforlisten 1124029 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@832 -- # '[' -z 1124029 ']' 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local max_retries=100 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@841 -- # xtrace_disable 00:10:56.203 19:40:13 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:56.203 [2024-07-24 19:40:13.453646] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:10:56.203 [2024-07-24 19:40:13.453732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:56.203 EAL: No free 2048 kB hugepages reported on node 1 00:10:56.203 [2024-07-24 19:40:13.524023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.461 [2024-07-24 19:40:13.644931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:56.461 [2024-07-24 19:40:13.644986] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:56.461 [2024-07-24 19:40:13.645002] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:56.461 [2024-07-24 19:40:13.645015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:56.461 [2024-07-24 19:40:13.645027] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
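At this point nvmftestinit has finished wiring the test topology: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace as the target interface at 10.0.0.2, while cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1. Condensed from the trace above, the setup amounts to the following sketch (interface names and addresses are the ones this run detected; on another host the cvl_* names will differ):

# start from a clean slate on both ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
# the target side lives in its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2 in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
# bring everything up and open the NVMe/TCP port on the initiator side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced above), so nvmf_tgt listens on 10.0.0.2 while the nvme CLI in the root namespace connects across the physical link.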
00:10:56.461 [2024-07-24 19:40:13.645110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.461 [2024-07-24 19:40:13.645168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.461 [2024-07-24 19:40:13.645224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.461 [2024-07-24 19:40:13.645220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@865 -- # return 0 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@731 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 [2024-07-24 19:40:14.470146] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 Null1 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 [2024-07-24 19:40:14.510455] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 Null2 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:10:57.394 Null3 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 Null4 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.394 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.395 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.395 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:57.652 00:10:57.652 Discovery Log Number of Records 6, Generation counter 6 00:10:57.652 =====Discovery Log Entry 0====== 00:10:57.652 trtype: tcp 00:10:57.652 adrfam: ipv4 00:10:57.652 subtype: current discovery subsystem 00:10:57.652 treq: not required 00:10:57.652 portid: 0 00:10:57.652 trsvcid: 4420 00:10:57.652 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:57.652 traddr: 10.0.0.2 00:10:57.652 eflags: explicit discovery connections, duplicate discovery information 00:10:57.652 sectype: none 00:10:57.652 =====Discovery Log Entry 1====== 00:10:57.652 trtype: tcp 00:10:57.652 adrfam: ipv4 00:10:57.652 subtype: nvme subsystem 00:10:57.652 treq: not required 00:10:57.652 portid: 0 00:10:57.652 trsvcid: 4420 00:10:57.652 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:57.652 traddr: 10.0.0.2 00:10:57.652 eflags: none 00:10:57.652 sectype: none 00:10:57.652 =====Discovery Log Entry 2====== 00:10:57.652 trtype: tcp 00:10:57.652 adrfam: ipv4 00:10:57.652 subtype: nvme subsystem 00:10:57.652 treq: not required 00:10:57.652 portid: 0 00:10:57.652 trsvcid: 4420 00:10:57.652 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:57.652 traddr: 10.0.0.2 00:10:57.652 eflags: none 00:10:57.652 sectype: none 00:10:57.652 =====Discovery Log Entry 3====== 00:10:57.652 trtype: tcp 00:10:57.652 adrfam: ipv4 00:10:57.652 subtype: nvme subsystem 00:10:57.652 treq: not required 00:10:57.652 portid: 0 00:10:57.652 trsvcid: 4420 00:10:57.652 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:57.652 traddr: 10.0.0.2 00:10:57.652 eflags: none 00:10:57.652 sectype: none 00:10:57.652 =====Discovery Log Entry 4====== 00:10:57.652 trtype: tcp 00:10:57.652 adrfam: ipv4 00:10:57.652 subtype: nvme subsystem 
00:10:57.652 treq: not required 00:10:57.652 portid: 0 00:10:57.652 trsvcid: 4420 00:10:57.652 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:57.652 traddr: 10.0.0.2 00:10:57.652 eflags: none 00:10:57.653 sectype: none 00:10:57.653 =====Discovery Log Entry 5====== 00:10:57.653 trtype: tcp 00:10:57.653 adrfam: ipv4 00:10:57.653 subtype: discovery subsystem referral 00:10:57.653 treq: not required 00:10:57.653 portid: 0 00:10:57.653 trsvcid: 4430 00:10:57.653 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:57.653 traddr: 10.0.0.2 00:10:57.653 eflags: none 00:10:57.653 sectype: none 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:57.653 Perform nvmf subsystem discovery via RPC 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 [ 00:10:57.653 { 00:10:57.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:57.653 "subtype": "Discovery", 00:10:57.653 "listen_addresses": [ 00:10:57.653 { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.2", 00:10:57.653 "trsvcid": "4420" 00:10:57.653 } 00:10:57.653 ], 00:10:57.653 "allow_any_host": true, 00:10:57.653 "hosts": [] 00:10:57.653 }, 00:10:57.653 { 00:10:57.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.653 "subtype": "NVMe", 00:10:57.653 "listen_addresses": [ 00:10:57.653 { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.2", 00:10:57.653 "trsvcid": "4420" 00:10:57.653 } 00:10:57.653 ], 00:10:57.653 "allow_any_host": true, 00:10:57.653 "hosts": [], 00:10:57.653 "serial_number": "SPDK00000000000001", 00:10:57.653 "model_number": "SPDK bdev Controller", 00:10:57.653 "max_namespaces": 32, 00:10:57.653 "min_cntlid": 1, 00:10:57.653 "max_cntlid": 65519, 00:10:57.653 "namespaces": [ 00:10:57.653 { 00:10:57.653 "nsid": 1, 00:10:57.653 "bdev_name": "Null1", 00:10:57.653 "name": "Null1", 00:10:57.653 "nguid": "77B8E1657FBA45CF9065C77312E9820C", 00:10:57.653 "uuid": "77b8e165-7fba-45cf-9065-c77312e9820c" 00:10:57.653 } 00:10:57.653 ] 00:10:57.653 }, 00:10:57.653 { 00:10:57.653 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:57.653 "subtype": "NVMe", 00:10:57.653 "listen_addresses": [ 00:10:57.653 { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.2", 00:10:57.653 "trsvcid": "4420" 00:10:57.653 } 00:10:57.653 ], 00:10:57.653 "allow_any_host": true, 00:10:57.653 "hosts": [], 00:10:57.653 "serial_number": "SPDK00000000000002", 00:10:57.653 "model_number": "SPDK bdev Controller", 00:10:57.653 "max_namespaces": 32, 00:10:57.653 "min_cntlid": 1, 00:10:57.653 "max_cntlid": 65519, 00:10:57.653 "namespaces": [ 00:10:57.653 { 00:10:57.653 "nsid": 1, 00:10:57.653 "bdev_name": "Null2", 00:10:57.653 "name": "Null2", 00:10:57.653 "nguid": "590E98FADFA547A69323E2B1BE4F2FDD", 00:10:57.653 "uuid": "590e98fa-dfa5-47a6-9323-e2b1be4f2fdd" 00:10:57.653 } 00:10:57.653 ] 00:10:57.653 }, 00:10:57.653 { 00:10:57.653 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:57.653 "subtype": "NVMe", 00:10:57.653 "listen_addresses": [ 00:10:57.653 { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.2", 
00:10:57.653 "trsvcid": "4420" 00:10:57.653 } 00:10:57.653 ], 00:10:57.653 "allow_any_host": true, 00:10:57.653 "hosts": [], 00:10:57.653 "serial_number": "SPDK00000000000003", 00:10:57.653 "model_number": "SPDK bdev Controller", 00:10:57.653 "max_namespaces": 32, 00:10:57.653 "min_cntlid": 1, 00:10:57.653 "max_cntlid": 65519, 00:10:57.653 "namespaces": [ 00:10:57.653 { 00:10:57.653 "nsid": 1, 00:10:57.653 "bdev_name": "Null3", 00:10:57.653 "name": "Null3", 00:10:57.653 "nguid": "EFE105AB4FB44DB2B1E97D4C883BEFBD", 00:10:57.653 "uuid": "efe105ab-4fb4-4db2-b1e9-7d4c883befbd" 00:10:57.653 } 00:10:57.653 ] 00:10:57.653 }, 00:10:57.653 { 00:10:57.653 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:57.653 "subtype": "NVMe", 00:10:57.653 "listen_addresses": [ 00:10:57.653 { 00:10:57.653 "trtype": "TCP", 00:10:57.653 "adrfam": "IPv4", 00:10:57.653 "traddr": "10.0.0.2", 00:10:57.653 "trsvcid": "4420" 00:10:57.653 } 00:10:57.653 ], 00:10:57.653 "allow_any_host": true, 00:10:57.653 "hosts": [], 00:10:57.653 "serial_number": "SPDK00000000000004", 00:10:57.653 "model_number": "SPDK bdev Controller", 00:10:57.653 "max_namespaces": 32, 00:10:57.653 "min_cntlid": 1, 00:10:57.653 "max_cntlid": 65519, 00:10:57.653 "namespaces": [ 00:10:57.653 { 00:10:57.653 "nsid": 1, 00:10:57.653 "bdev_name": "Null4", 00:10:57.653 "name": "Null4", 00:10:57.653 "nguid": "952E1A833DFC45AEA9C457D0119A22F8", 00:10:57.653 "uuid": "952e1a83-3dfc-45ae-a9c4-57d0119a22f8" 00:10:57.653 } 00:10:57.653 ] 00:10:57.653 } 00:10:57.653 ] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:57.653 19:40:14 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:10:57.653 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:57.654 rmmod nvme_tcp 00:10:57.654 rmmod nvme_fabrics 00:10:57.654 rmmod nvme_keyring 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # '[' -n 1124029 ']' 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # killprocess 1124029 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' -z 1124029 ']' 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # kill -0 1124029 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # uname 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:10:57.654 19:40:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1124029 00:10:57.654 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:10:57.654 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:10:57.654 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1124029' 00:10:57.654 killing process with pid 1124029 00:10:57.654 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # kill 1124029 00:10:57.654 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@975 -- # wait 1124029 00:10:58.221 19:40:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.221 19:40:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:11:00.131 00:11:00.131 real 0m6.164s 00:11:00.131 user 0m7.413s 00:11:00.131 sys 0m1.884s 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:00.131 ************************************ 00:11:00.131 END TEST nvmf_target_discovery 00:11:00.131 ************************************ 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:00.131 ************************************ 00:11:00.131 START TEST nvmf_referrals 00:11:00.131 ************************************ 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:00.131 * Looking for test storage... 
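Before the referrals suite gets going, it is worth condensing what the discovery test just exercised. Every step above went through rpc_cmd, the harness helper that forwards to the target's JSON-RPC server; the same method names are available via scripts/rpc.py. A minimal sketch of the flow for one of the four subsystems (assuming a target already listening on the default /var/tmp/spdk.sock; the seq 1 4 loop above repeats the bdev/subsystem steps for Null1..Null4 and cnode1..cnode4):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# transport, backing bdev, subsystem, namespace, listener -- as in discovery.sh
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_null_create Null1 102400 512      # $NULL_BDEV_SIZE / $NULL_BLOCK_SIZE from the script
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# expose the discovery service itself, plus a referral on port 4430
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
# the kernel initiator then sees 6 records: discovery, 4 subsystems, 1 referral
nvme discover -t tcp -a 10.0.0.2 -s 4420
# same view over RPC, as JSON (the output dumped above)
$RPC nvmf_get_subsystems
# teardown mirrors setup
$RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
$RPC bdev_null_delete Null1

The final bdev_get_bdevs | jq -r '.[].name' check above returned an empty list, which is exactly what the test asserts: all four null bdevs were deleted before nvmftestfini tore the namespace down.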
00:11:00.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.131 
19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:00.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:00.131 19:40:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # xtrace_disable 00:11:00.131 19:40:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # pci_devs=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -a pci_devs 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # pci_net_devs=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # pci_drivers=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -A pci_drivers 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # net_devs=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # local -ga net_devs 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # e810=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # local -ga e810 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # x722=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # local -ga x722 00:11:02.734 19:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # mlx=() 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # local -ga mlx 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:02.734 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:02.734 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:02.734 19:40:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:02.734 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:02.734 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # is_hw=yes 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:11:02.734 19:40:19 
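Both "Found net devices under ..." lines come from the sysfs glob visible in the trace: the kernel exposes each PCI function's interface name under its device directory. Reproduced standalone with the first port from this log:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip path, keep iface name
    echo "${pci_net_devs[@]}"                 # -> cvl_0_0 on this host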
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:02.734 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:11:02.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:02.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:11:02.735 00:11:02.735 --- 10.0.0.2 ping statistics --- 00:11:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.735 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:02.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:02.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:02.735 00:11:02.735 --- 10.0.0.1 ping statistics --- 00:11:02.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:02.735 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # return 0 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@725 -- # xtrace_disable 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@485 -- # nvmfpid=1126126 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@486 -- # waitforlisten 1126126 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@832 -- # '[' -z 1126126 ']' 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:02.735 19:40:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.735 [2024-07-24 19:40:19.744109] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
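nvmf_tcp_init, condensed from the trace above: one ice port stays in the root namespace as the initiator, the other moves into cvl_0_0_ns_spdk as the target, and the cross pings prove both directions work before any NVMe traffic starts:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator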
00:11:02.735 [2024-07-24 19:40:19.744207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:02.735 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.735 [2024-07-24 19:40:19.825046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.735 [2024-07-24 19:40:19.949309] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:02.735 [2024-07-24 19:40:19.949361] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:02.735 [2024-07-24 19:40:19.949386] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:02.735 [2024-07-24 19:40:19.949399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:02.735 [2024-07-24 19:40:19.949411] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:02.735 [2024-07-24 19:40:19.949467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.735 [2024-07-24 19:40:19.949498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.735 [2024-07-24 19:40:19.949550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:02.735 [2024-07-24 19:40:19.949553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.735 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:02.735 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@865 -- # return 0 00:11:02.735 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:02.735 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@731 -- # xtrace_disable 00:11:02.735 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 [2024-07-24 19:40:20.126812] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 [2024-07-24 19:40:20.139065] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 
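rpc_cmd in the trace is the test suite's wrapper around SPDK's JSON-RPC client; assuming the default /var/tmp/spdk.sock, the two calls that created the transport and the discovery listener above are equivalent to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery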
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:02.994 19:40:20 
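referrals.sh@44-49 above, condensed: register three referral entries against the discovery subsystem, then verify the count and the sorted addresses through the RPC side:

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    (( $(rpc_cmd nvmf_discovery_get_referrals | jq length) == 3 ))
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort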
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:02.994 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:02.995 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:03.252 19:40:20 
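The nvme-side check repeats the same comparison from the wire: discover against the 10.0.0.2:8009 discovery service and keep every log-page record except the current discovery subsystem itself (the hostnqn/hostid flags from the trace are trimmed here for width):

    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort | xargs
    # -> '127.0.0.2 127.0.0.3 127.0.0.4' while all three referrals exist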
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.252 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:03.253 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:03.510 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.511 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:03.768 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:03.768 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:03.768 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:03.768 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:03.768 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:03.768 19:40:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:03.768 19:40:21 
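get_discovery_entries, as exercised above, is the same discover call filtered by record subtype; the subnqn of the surviving record is what the [[ ... ]] checks compare. A trimmed sketch (again omitting the hostnqn/hostid arguments):

    get_discovery_entries() {
        local subtype=$1
        nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
            | jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
    }
    get_discovery_entries 'nvme subsystem' | jq -r .subnqn
    # -> nqn.2016-06.io.spdk:cnode1 once the referral with -n cnode1 is added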
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:03.768 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.026 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.027 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:04.285 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:04.544 
19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:04.544 rmmod nvme_tcp 00:11:04.544 rmmod nvme_fabrics 00:11:04.544 rmmod nvme_keyring 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # '[' -n 1126126 ']' 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # killprocess 1126126 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' -z 1126126 ']' 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # kill -0 1126126 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # uname 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1126126 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1126126' 00:11:04.544 killing process with pid 1126126 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # kill 1126126 00:11:04.544 19:40:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@975 -- # wait 1126126 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.802 19:40:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.708 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:11:06.708 00:11:06.708 real 0m6.705s 00:11:06.708 user 0m9.600s 00:11:06.708 sys 0m2.173s 00:11:06.708 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:06.708 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:06.708 ************************************ 00:11:06.708 END TEST nvmf_referrals 00:11:06.708 ************************************ 00:11:06.967 
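The teardown above disables errexit and unloads nvme-tcp and nvme-fabrics inside a bounded retry loop, since the modules can stay referenced for a moment after the target exits. Approximately (the real loop's ordering in nvmf/common.sh may differ):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e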
19:40:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:06.967 19:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:11:06.967 19:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:06.967 19:40:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.968 ************************************ 00:11:06.968 START TEST nvmf_connect_disconnect 00:11:06.968 ************************************ 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:06.968 * Looking for test storage... 00:11:06.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
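The PATH echoed above carries the go/golangci/protoc prefixes once per source of paths/export.sh. An idempotent prepend would keep it flat; purely illustrative, not what the script does today:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH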
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # xtrace_disable 00:11:06.968 19:40:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # pci_devs=() 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -a pci_devs 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # pci_net_devs=() 00:11:08.871 19:40:26 
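The "[: : integer expression expected" complaint above is '[' evaluating '' -eq 1 at nvmf/common.sh line 33. The trace does not show which variable was empty; a generic guard of the form below would keep the test well-formed (hypothetical fix, not present in the script):

    flag_is_set() {
        # Default the operand so '[' always sees an integer.
        [ "${1:-0}" -eq 1 ]
    }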
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # pci_drivers=() 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -A pci_drivers 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # net_devs=() 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # local -ga net_devs 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # e810=() 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # local -ga e810 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # x722=() 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # local -ga x722 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # mlx=() 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # local -ga mlx 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.871 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:11:08.872 19:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@386 -- # for 
pci in "${pci_devs[@]}" 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # is_hw=yes 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:11:08.872 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:11:09.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:11:09.131 00:11:09.131 --- 10.0.0.2 ping statistics --- 00:11:09.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.131 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:11:09.131 00:11:09.131 --- 10.0.0.1 ping statistics --- 00:11:09.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.131 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # return 0 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@725 -- # xtrace_disable 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@485 -- # nvmfpid=1128411 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@486 -- # waitforlisten 1128411 00:11:09.131 
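At this point the harness has finished building its standard two-port loopback: the first E810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and one ping in each direction proves connectivity. A minimal sketch of the same topology, assuming two physically looped-back ports (the cvl_* names are specific to this machine):

    TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TARGET_IF" && ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"            # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1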
19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # '[' -z 1128411 ']' 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:09.131 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.131 [2024-07-24 19:40:26.361459] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:11:09.131 [2024-07-24 19:40:26.361557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.131 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.131 [2024-07-24 19:40:26.435427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.389 [2024-07-24 19:40:26.557535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.389 [2024-07-24 19:40:26.557597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.389 [2024-07-24 19:40:26.557614] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.389 [2024-07-24 19:40:26.557628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.389 [2024-07-24 19:40:26.557640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
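The target application is launched inside that namespace. A sketch of the equivalent done by hand, assuming an in-tree SPDK build; the poll loop is a simplified stand-in for the harness's waitforlisten helper:

    # -i 0 sets the shared-memory instance id, -e 0xFFFF enables all
    # tracepoint groups, -m 0xF pins four reactors (matching the four
    # "Reactor started on core N" notices below).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.1
    done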
00:11:09.389 [2024-07-24 19:40:26.557703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.389 [2024-07-24 19:40:26.557733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.389 [2024-07-24 19:40:26.557786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.389 [2024-07-24 19:40:26.557789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@865 -- # return 0 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@731 -- # xtrace_disable 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 [2024-07-24 19:40:26.722770] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:09.389 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 19:40:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 [2024-07-24 19:40:26.784190] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:09.646 19:40:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:12.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.082 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:23.082 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:23.082 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:23.082 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:11:23.082 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:23.083 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:11:23.083 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:23.083 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:23.083 rmmod nvme_tcp 00:11:23.083 rmmod nvme_fabrics 00:11:23.083 rmmod nvme_keyring 00:11:23.340 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:23.340 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # '[' -n 1128411 ']' 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # killprocess 1128411 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' -z 1128411 ']' 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # kill -0 1128411 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # uname 
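The test body that just ran is plain RPC traffic followed by five initiator round trips: create the TCP transport, back it with a 64 MiB Malloc bdev, expose the bdev through subsystem cnode1 listening on 10.0.0.2:4420, then connect and disconnect the kernel initiator num_iterations=5 times. Condensed into a hedged sketch (rpc() is local shorthand, not the harness's rpc_cmd; the transport flags are copied from the trace, with -u the io-unit size and -c the in-capsule data size):

    rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc bdev_malloc_create 64 512                  # 64 MiB bdev, 512 B blocks -> Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1  # prints the "disconnected 1 controller(s)" lines above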
00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1128411 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1128411' 00:11:23.341 killing process with pid 1128411 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # kill 1128411 00:11:23.341 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@975 -- # wait 1128411 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.600 19:40:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:11:25.556 00:11:25.556 real 0m18.699s 00:11:25.556 user 0m56.358s 00:11:25.556 sys 0m3.120s 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:25.556 ************************************ 00:11:25.556 END TEST nvmf_connect_disconnect 00:11:25.556 ************************************ 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.556 ************************************ 00:11:25.556 START TEST nvmf_multitarget 00:11:25.556 ************************************ 00:11:25.556 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:25.814 * Looking for test storage... 
00:11:25.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.814 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:11:25.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # xtrace_disable 00:11:25.815 19:40:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # pci_devs=() 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -a pci_devs 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # pci_net_devs=() 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # pci_drivers=() 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -A pci_drivers 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # net_devs=() 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # local -ga net_devs 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # e810=() 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # local -ga e810 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # x722=() 00:11:27.713 19:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # local -ga x722 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # mlx=() 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # local -ga mlx 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.713 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:27.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:27.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:27.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:27.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:11:27.714 19:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # is_hw=yes 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:11:27.714 19:40:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:11:27.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:11:27.714 00:11:27.714 --- 10.0.0.2 ping statistics --- 00:11:27.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.714 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:11:27.714 00:11:27.714 --- 10.0.0.1 ping statistics --- 00:11:27.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.714 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # return 0 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@725 -- # xtrace_disable 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@485 -- # nvmfpid=1132181 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@486 -- # waitforlisten 1132181 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@832 -- # '[' -z 1132181 ']' 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:27.714 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:27.972 [2024-07-24 19:40:45.118603] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
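The second nvmf_tgt instance starting here serves the multitarget test traced below: an SPDK app always carries one default target, the test adds nvmf_tgt_1 and nvmf_tgt_2, verifies the count with jq, deletes both, and verifies the count falls back to 1. Condensed, assuming the same multitarget_rpc.py wrapper around the nvmf_get_targets / nvmf_create_target / nvmf_delete_target RPCs:

    rpc_py=./test/nvmf/target/multitarget_rpc.py
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # -s 32 as in the trace
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]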
00:11:27.972 [2024-07-24 19:40:45.118704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.972 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.972 [2024-07-24 19:40:45.197575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.972 [2024-07-24 19:40:45.333580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.972 [2024-07-24 19:40:45.333645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.972 [2024-07-24 19:40:45.333685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.972 [2024-07-24 19:40:45.333707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.972 [2024-07-24 19:40:45.333725] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:27.972 [2024-07-24 19:40:45.333847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.972 [2024-07-24 19:40:45.333913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.972 [2024-07-24 19:40:45.333989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.972 [2024-07-24 19:40:45.333979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.229 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:28.229 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@865 -- # return 0 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@731 -- # xtrace_disable 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:28.230 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:28.487 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:28.487 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:28.487 "nvmf_tgt_1" 00:11:28.487 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:28.487 "nvmf_tgt_2" 00:11:28.487 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:28.487 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:28.744 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:28.745 19:40:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:28.745 true 00:11:28.745 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:29.002 true 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.002 rmmod nvme_tcp 00:11:29.002 rmmod nvme_fabrics 00:11:29.002 rmmod nvme_keyring 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # '[' -n 1132181 ']' 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # killprocess 1132181 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' -z 1132181 ']' 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # kill -0 1132181 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # uname 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1132181 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:29.002 19:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1132181' 00:11:29.002 killing process with pid 1132181 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # kill 1132181 00:11:29.002 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@975 -- # wait 1132181 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.260 19:40:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:11:31.791 00:11:31.791 real 0m5.786s 00:11:31.791 user 0m6.654s 00:11:31.791 sys 0m1.903s 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:31.791 ************************************ 00:11:31.791 END TEST nvmf_multitarget 00:11:31.791 ************************************ 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.791 ************************************ 00:11:31.791 START TEST nvmf_rpc 00:11:31.791 ************************************ 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:31.791 * Looking for test storage... 
00:11:31.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.791 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:31.792 19:40:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # xtrace_disable 00:11:31.792 19:40:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # pci_devs=() 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -a pci_devs 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # pci_net_devs=() 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # pci_drivers=() 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -A pci_drivers 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # net_devs=() 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # local -ga net_devs 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # e810=() 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # local -ga e810 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # x722=() 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # local -ga x722 00:11:33.694 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # mlx=() 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # local -ga mlx 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:33.695 
19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # is_hw=yes 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:11:33.695 19:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:11:33.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:11:33.695 00:11:33.695 --- 10.0.0.2 ping statistics --- 00:11:33.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.695 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:33.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:11:33.695 00:11:33.695 --- 10.0.0.1 ping statistics --- 00:11:33.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.695 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # return 0 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@725 -- # xtrace_disable 00:11:33.695 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@485 -- # nvmfpid=1134277 00:11:33.696 19:40:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@486 -- # waitforlisten 1134277 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@832 -- # '[' -z 1134277 ']' 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:33.696 19:40:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.696 [2024-07-24 19:40:50.826936] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:11:33.696 [2024-07-24 19:40:50.827036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.696 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.696 [2024-07-24 19:40:50.892822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.696 [2024-07-24 19:40:51.003462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.696 [2024-07-24 19:40:51.003518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.696 [2024-07-24 19:40:51.003547] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.696 [2024-07-24 19:40:51.003559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.696 [2024-07-24 19:40:51.003569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
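[Annotation] The nvmf_tcp_init sequence traced above condenses to the commands below. This is a sketch of what this particular run executed, not a general recipe: the interface names (cvl_0_0, cvl_0_1), the namespace name, and the 10.0.0.x addresses are specific to this host.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

With both pings succeeding, nvmf_tgt is launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt line above), so the initiator-side nvme-cli traffic has to cross the physical link between the two e810 ports rather than loop back in software.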
00:11:33.696 [2024-07-24 19:40:51.003631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.696 [2024-07-24 19:40:51.003725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.696 [2024-07-24 19:40:51.003775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.696 [2024-07-24 19:40:51.003772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@865 -- # return 0 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@731 -- # xtrace_disable 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:33.955 "tick_rate": 2700000000, 00:11:33.955 "poll_groups": [ 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_000", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [] 00:11:33.955 }, 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_001", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [] 00:11:33.955 }, 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_002", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [] 00:11:33.955 }, 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_003", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [] 00:11:33.955 } 00:11:33.955 ] 00:11:33.955 }' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
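[Annotation] The (( 4 == 4 )) check above is the jcount helper from target/rpc.sh; jsum, exercised just below, is its summing sibling. A minimal sketch consistent with the trace, assuming the nvmf_get_stats JSON has been captured in $stats (the trace does not show the helpers' input plumbing, so that part is an assumption):

    # count how many values a jq filter yields
    jcount() { jq "$1" <<< "$stats" | wc -l; }
    # sum the numeric values a jq filter yields
    jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }

    jcount '.poll_groups[].name'         # 4: one poll group per core in -m 0xF
    jsum '.poll_groups[].admin_qpairs'   # 0: nothing is connected yet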
00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.955 [2024-07-24 19:40:51.263002] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:33.955 "tick_rate": 2700000000, 00:11:33.955 "poll_groups": [ 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_000", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [ 00:11:33.955 { 00:11:33.955 "trtype": "TCP" 00:11:33.955 } 00:11:33.955 ] 00:11:33.955 }, 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_001", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [ 00:11:33.955 { 00:11:33.955 "trtype": "TCP" 00:11:33.955 } 00:11:33.955 ] 00:11:33.955 }, 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_002", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [ 00:11:33.955 { 00:11:33.955 "trtype": "TCP" 00:11:33.955 } 00:11:33.955 ] 00:11:33.955 }, 00:11:33.955 { 00:11:33.955 "name": "nvmf_tgt_poll_group_003", 00:11:33.955 "admin_qpairs": 0, 00:11:33.955 "io_qpairs": 0, 00:11:33.955 "current_admin_qpairs": 0, 00:11:33.955 "current_io_qpairs": 0, 00:11:33.955 "pending_bdev_io": 0, 00:11:33.955 "completed_nvme_io": 0, 00:11:33.955 "transports": [ 00:11:33.955 { 00:11:33.955 "trtype": "TCP" 00:11:33.955 } 00:11:33.955 ] 00:11:33.955 } 00:11:33.955 ] 00:11:33.955 }' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:33.955 19:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:33.955 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.213 Malloc1 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:34.213 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.214 [2024-07-24 19:40:51.411909] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # local es=0 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@639 -- # local arg=nvme 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # type -t nvme 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # type -P nvme 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # arg=/usr/sbin/nvme 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # [[ -x /usr/sbin/nvme ]] 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:34.214 [2024-07-24 19:40:51.434378] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:34.214 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:34.214 could not add new controller: failed to write to nvme-fabrics device 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # es=1 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:34.214 19:40:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.779 19:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.779 19:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local i=0 00:11:34.779 19:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.779 19:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:34.779 19:40:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # local es=0 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@639 -- # local arg=nvme 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # type -t nvme 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # type -P nvme 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # arg=/usr/sbin/nvme 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@645 -- # [[ -x /usr/sbin/nvme ]] 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.303 [2024-07-24 19:40:54.174673] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:37.303 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:37.303 could not add new controller: failed to write to nvme-fabrics device 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # es=1 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:37.303 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.562 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.562 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local i=0 00:11:37.562 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.562 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:37.562 19:40:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 
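[Annotation] The waitforserial pattern the trace is stepping through here, and repeats after every subsequent nvme connect, condenses to the sketch below. The 15-try bound, 2-second sleep, and lsblk|grep probe are all visible in the trace; the exact function body in autotest_common.sh may differ in detail.

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        sleep 2                                # let the fabric connect settle
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
            sleep 2
        done
        return 1
    }

waitforserial_disconnect is the mirror image: it polls until grep -q -w no longer finds the serial in the lsblk output.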
00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:39.460 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.717 [2024-07-24 19:40:56.962081] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:39.717 
19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:39.717 19:40:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.648 19:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.648 19:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local i=0 00:11:40.648 19:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.648 19:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:40.648 19:40:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 
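[Annotation] Each pass of the seq 1 $loops cycle, including the remove_ns/delete_subsystem teardown that follows immediately below, issues the same sequence. Condensed from the trace; rpc_cmd is the suite's RPC wrapper, and the host NQN/ID come from the nvme gen-hostnqn call earlier in the run:

    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        waitforserial_disconnect SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done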
00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 [2024-07-24 19:40:59.795840] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:42.542 19:40:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:43.491 19:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.491 19:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1199 -- # local i=0 00:11:43.491 19:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.491 19:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:43.491 19:41:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:45.385 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.386 [2024-07-24 19:41:02.624894] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:45.386 19:41:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.951 19:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.951 19:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local i=0 00:11:45.951 19:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.951 19:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:45.951 19:41:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.475 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.475 19:41:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.475 [2024-07-24 19:41:05.396944] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:48.475 19:41:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.732 19:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.732 19:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local i=0 00:11:48.732 19:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.732 19:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:48.732 19:41:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 [2024-07-24 19:41:08.160984] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:51.254 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:51.511 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:51.511 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local i=0 00:11:51.511 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.511 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:11:51.511 19:41:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # sleep 2 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:11:54.038 19:41:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # return 0 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # local i=0 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1232 -- # return 0 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.038 19:41:10 
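The waitforserial/waitforserial_disconnect helpers traced repeatedly above come from common/autotest_common.sh: they poll lsblk until a block device with the expected SERIAL column appears (or disappears). A minimal sketch consistent with the trace; the real helper differs in detail (it takes an optional expected device count and gives up after ~15 tries, as the (( i++ <= 15 )) lines show):

    waitforserial() {
        local serial=$1 want=${2:-1} have=0 i=0
        sleep 2                                   # give the fabric a moment, as traced
        while (( i++ <= 15 )); do
            # count block devices whose SERIAL matches the subsystem serial
            have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( have == want )) && return 0
            sleep 2
        done
        return 1
    }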
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.038 [2024-07-24 19:41:10.956865] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.038 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 [2024-07-24 19:41:11.004944] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 [2024-07-24 19:41:11.053094] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 [2024-07-24 19:41:11.101258] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.039 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.040 [2024-07-24 19:41:11.149439] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:11:54.040 19:41:11 
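The second loop (target/rpc.sh@99-107, whose iterations are traced above and whose final delete follows just below) skips the host connect entirely and only cycles namespace attach/detach, which is why no nvme connect appears between iterations. Roughly, with $rpc as before:

    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # no -n: nsid auto-assigned
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done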
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@562 -- # xtrace_disable
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:54.040 "tick_rate": 2700000000,
00:11:54.040 "poll_groups": [
00:11:54.040 {
00:11:54.040 "name": "nvmf_tgt_poll_group_000",
00:11:54.040 "admin_qpairs": 2,
00:11:54.040 "io_qpairs": 84,
00:11:54.040 "current_admin_qpairs": 0,
00:11:54.040 "current_io_qpairs": 0,
00:11:54.040 "pending_bdev_io": 0,
00:11:54.040 "completed_nvme_io": 184,
00:11:54.040 "transports": [
00:11:54.040 {
00:11:54.040 "trtype": "TCP"
00:11:54.040 }
00:11:54.040 ]
00:11:54.040 },
00:11:54.040 {
00:11:54.040 "name": "nvmf_tgt_poll_group_001",
00:11:54.040 "admin_qpairs": 2,
00:11:54.040 "io_qpairs": 84,
00:11:54.040 "current_admin_qpairs": 0,
00:11:54.040 "current_io_qpairs": 0,
00:11:54.040 "pending_bdev_io": 0,
00:11:54.040 "completed_nvme_io": 156,
00:11:54.040 "transports": [
00:11:54.040 {
00:11:54.040 "trtype": "TCP"
00:11:54.040 }
00:11:54.040 ]
00:11:54.040 },
00:11:54.040 {
00:11:54.040 "name": "nvmf_tgt_poll_group_002",
00:11:54.040 "admin_qpairs": 1,
00:11:54.040 "io_qpairs": 84,
00:11:54.040 "current_admin_qpairs": 0,
00:11:54.040 "current_io_qpairs": 0,
00:11:54.040 "pending_bdev_io": 0,
00:11:54.040 "completed_nvme_io": 262,
00:11:54.040 "transports": [
00:11:54.040 {
00:11:54.040 "trtype": "TCP"
00:11:54.040 }
00:11:54.040 ]
00:11:54.040 },
00:11:54.040 {
00:11:54.040 "name": "nvmf_tgt_poll_group_003",
00:11:54.040 "admin_qpairs": 2,
00:11:54.040 "io_qpairs": 84,
00:11:54.040 "current_admin_qpairs": 0,
00:11:54.040 "current_io_qpairs": 0,
00:11:54.040 "pending_bdev_io": 0,
00:11:54.040 "completed_nvme_io": 84,
00:11:54.040 "transports": [
00:11:54.040 {
00:11:54.040 "trtype": "TCP"
00:11:54.040 }
00:11:54.040 ]
00:11:54.040 }
00:11:54.040 ]
00:11:54.040 }'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # nvmfcleanup 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.040 rmmod nvme_tcp 00:11:54.040 rmmod nvme_fabrics 00:11:54.040 rmmod nvme_keyring 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # '[' -n 1134277 ']' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # killprocess 1134277 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' -z 1134277 ']' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # kill -0 1134277 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # uname 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1134277 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1134277' 00:11:54.040 killing process with pid 1134277 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # kill 1134277 00:11:54.040 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@975 -- # wait 1134277 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@282 -- # remove_spdk_ns 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
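The stats check traced above is a plain shell reduction over the nvmf_get_stats JSON: jsum (target/rpc.sh@19-20, as traced) extracts one numeric field per poll group with jq and sums it with awk. A sketch of that helper:

    # jsum '<jq filter>' -- sum a numeric field across the captured $stats JSON
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

From the values printed above: admin_qpairs 2+2+1+2 = 7 and io_qpairs 4*84 = 336, matching the (( 7 > 0 )) and (( 336 > 0 )) assertions in the trace.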
-- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.606 19:41:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:11:56.509 00:11:56.509 real 0m25.019s 00:11:56.509 user 1m21.640s 00:11:56.509 sys 0m3.991s 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:56.509 ************************************ 00:11:56.509 END TEST nvmf_rpc 00:11:56.509 ************************************ 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:56.509 ************************************ 00:11:56.509 START TEST nvmf_invalid 00:11:56.509 ************************************ 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:56.509 * Looking for test storage... 00:11:56.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:56.509 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:56.510 19:41:13 
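Before nvmf_invalid begins its own setup, the nvmf_rpc teardown traced just above (nvmftestfini) reduces to unloading the host-side NVMe modules, killing the target process, and dropping the namespaced interface configuration. A sketch; the netns-removal step is an assumption about what the suppressed _remove_spdk_ns call does:

    modprobe -v -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring, per the rmmod lines
    kill "$nvmfpid" && wait "$nvmfpid" # killprocess, pid 1134277 in this run
    ip netns delete cvl_0_0_ns_spdk    # assumption: how _remove_spdk_ns tears the namespace down
    ip -4 addr flush cvl_0_1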
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@452 -- # prepare_net_devs 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # local -g is_hw=no 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # remove_spdk_ns 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.510 19:41:13 
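One artifact worth noting in the trace above: the "[: : integer expression expected" message from nvmf/common.sh line 33 is the classic empty-variable-in-a-numeric-test pattern, not a failure of the test run itself (the script continues past it). In isolation:

    [ '' -eq 1 ]                  # -> bash: [: : integer expression expected
    [ "${SPDK_TEST_X:-0}" -eq 1 ] # guarded form; SPDK_TEST_X is a placeholder name,
                                  # not the actual variable common.sh checks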
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # xtrace_disable 00:11:56.510 19:41:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # pci_devs=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -a pci_devs 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # pci_net_devs=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # pci_drivers=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -A pci_drivers 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # net_devs=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # local -ga net_devs 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # e810=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # local -ga e810 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # x722=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # local -ga x722 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # mlx=() 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # local -ga mlx 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@324 -- # 
pci_devs+=("${e810[@]}") 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.038 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.038 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:11:59.038 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.038 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # [[ up == up ]] 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # is_hw=yes 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_1 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:11:59.039 19:41:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:11:59.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:59.039 00:11:59.039 --- 10.0.0.2 ping statistics --- 00:11:59.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.039 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:11:59.039 00:11:59.039 --- 10.0.0.1 ping statistics --- 00:11:59.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.039 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # return 0 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@725 -- # xtrace_disable 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@485 -- # nvmfpid=1138764 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@486 -- # waitforlisten 1138764 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.039 19:41:16 
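The nvmf_tcp_init sequence above builds the standard two-port test topology for these phy runs: the first E810 port (cvl_0_0) moves into a fresh network namespace and becomes the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with both directions verified by ping. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator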
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@832 -- # '[' -z 1138764 ']' 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local max_retries=100 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@841 -- # xtrace_disable 00:11:59.039 19:41:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.039 [2024-07-24 19:41:16.115462] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:11:59.039 [2024-07-24 19:41:16.115546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.039 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.039 [2024-07-24 19:41:16.185003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.039 [2024-07-24 19:41:16.305821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.039 [2024-07-24 19:41:16.305887] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.039 [2024-07-24 19:41:16.305903] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.039 [2024-07-24 19:41:16.305917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.039 [2024-07-24 19:41:16.305928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
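nvmfappstart, as traced above, launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. Roughly (the polling loop is a sketch of the idea, not the exact helper from autotest_common.sh):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten: retry an RPC against /var/tmp/spdk.sock until the app is up
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done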
00:11:59.039 [2024-07-24 19:41:16.305992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.039 [2024-07-24 19:41:16.306046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.039 [2024-07-24 19:41:16.306101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.039 [2024-07-24 19:41:16.306098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@865 -- # return 0 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@731 -- # xtrace_disable 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1640 00:11:59.970 [2024-07-24 19:41:17.293544] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:59.970 { 00:11:59.970 "nqn": "nqn.2016-06.io.spdk:cnode1640", 00:11:59.970 "tgt_name": "foobar", 00:11:59.970 "method": "nvmf_create_subsystem", 00:11:59.970 "req_id": 1 00:11:59.970 } 00:11:59.970 Got JSON-RPC error response 00:11:59.970 response: 00:11:59.970 { 00:11:59.970 "code": -32603, 00:11:59.970 "message": "Unable to find target foobar" 00:11:59.970 }' 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:59.970 { 00:11:59.970 "nqn": "nqn.2016-06.io.spdk:cnode1640", 00:11:59.970 "tgt_name": "foobar", 00:11:59.970 "method": "nvmf_create_subsystem", 00:11:59.970 "req_id": 1 00:11:59.970 } 00:11:59.970 Got JSON-RPC error response 00:11:59.970 response: 00:11:59.970 { 00:11:59.970 "code": -32603, 00:11:59.970 "message": "Unable to find target foobar" 00:11:59.970 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:59.970 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode116 00:12:00.228 [2024-07-24 19:41:17.590625] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode116: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:00.485 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:00.485 { 00:12:00.485 "nqn": "nqn.2016-06.io.spdk:cnode116", 00:12:00.485 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:00.485 "method": "nvmf_create_subsystem", 00:12:00.485 "req_id": 1 00:12:00.485 } 00:12:00.485 Got JSON-RPC error response 
00:12:00.485 response: 00:12:00.485 { 00:12:00.485 "code": -32602, 00:12:00.485 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:00.485 }' 00:12:00.485 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:00.485 { 00:12:00.485 "nqn": "nqn.2016-06.io.spdk:cnode116", 00:12:00.485 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:00.485 "method": "nvmf_create_subsystem", 00:12:00.485 "req_id": 1 00:12:00.485 } 00:12:00.485 Got JSON-RPC error response 00:12:00.485 response: 00:12:00.485 { 00:12:00.485 "code": -32602, 00:12:00.485 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:00.485 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:00.485 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:00.485 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14730 00:12:00.744 [2024-07-24 19:41:17.867434] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14730: invalid model number 'SPDK_Controller' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:00.744 { 00:12:00.744 "nqn": "nqn.2016-06.io.spdk:cnode14730", 00:12:00.744 "model_number": "SPDK_Controller\u001f", 00:12:00.744 "method": "nvmf_create_subsystem", 00:12:00.744 "req_id": 1 00:12:00.744 } 00:12:00.744 Got JSON-RPC error response 00:12:00.744 response: 00:12:00.744 { 00:12:00.744 "code": -32602, 00:12:00.744 "message": "Invalid MN SPDK_Controller\u001f" 00:12:00.744 }' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:00.744 { 00:12:00.744 "nqn": "nqn.2016-06.io.spdk:cnode14730", 00:12:00.744 "model_number": "SPDK_Controller\u001f", 00:12:00.744 "method": "nvmf_create_subsystem", 00:12:00.744 "req_id": 1 00:12:00.744 } 00:12:00.744 Got JSON-RPC error response 00:12:00.744 response: 00:12:00.744 { 00:12:00.744 "code": -32602, 00:12:00.744 "message": "Invalid MN SPDK_Controller\u001f" 00:12:00.744 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 61 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:00.744 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=t 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ = == \- ]] 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '=qw4PWLP]v~O5Q.KBvnmt' 00:12:00.745 19:41:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '=qw4PWLP]v~O5Q.KBvnmt' nqn.2016-06.io.spdk:cnode20221 00:12:01.004 [2024-07-24 19:41:18.184500] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20221: invalid serial number '=qw4PWLP]v~O5Q.KBvnmt' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:01.004 { 00:12:01.004 "nqn": "nqn.2016-06.io.spdk:cnode20221", 00:12:01.004 "serial_number": "=qw4PWLP]v~O5Q.KBvnmt", 00:12:01.004 "method": "nvmf_create_subsystem", 00:12:01.004 "req_id": 1 00:12:01.004 } 00:12:01.004 Got JSON-RPC error response 00:12:01.004 response: 00:12:01.004 { 00:12:01.004 "code": -32602, 00:12:01.004 "message": "Invalid SN =qw4PWLP]v~O5Q.KBvnmt" 00:12:01.004 }' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:01.004 { 00:12:01.004 "nqn": "nqn.2016-06.io.spdk:cnode20221", 00:12:01.004 "serial_number": "=qw4PWLP]v~O5Q.KBvnmt", 00:12:01.004 "method": "nvmf_create_subsystem", 00:12:01.004 "req_id": 1 00:12:01.004 } 00:12:01.004 Got JSON-RPC error response 00:12:01.004 response: 00:12:01.004 { 00:12:01.004 "code": -32602, 00:12:01.004 "message": "Invalid SN =qw4PWLP]v~O5Q.KBvnmt" 00:12:01.004 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
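The character-at-a-time trace here comes from gen_random_s in target/invalid.sh: invalid.sh@54 just fed the generated 21-character serial '=qw4PWLP]v~O5Q.KBvnmt' to nvmf_create_subsystem and matched the expected "Invalid SN" error, and invalid.sh@58 now builds a 41-character model number the same way (trace continues below). A condensed reconstruction of the generator as the xtrace shows it; the printf %x / echo -e expansion per character is taken from the trace, but how each index into chars is picked is not visible in the log, so the RANDOM-based choice is an assumption:

gen_random_s() {
    local length=$1 ll string=
    local -a chars
    chars=({32..127})   # the same 96-entry ASCII code table dumped at target/invalid.sh@21
    for ((ll = 0; ll < length; ll++)); do
        # Pick a code point, render it via printf/echo -e, append it to the
        # string -- exactly the invalid.sh@25 steps traced above (index choice assumed).
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"   # e.g. gen_random_s 21 produced '=qw4PWLP]v~O5Q.KBvnmt' above
}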
00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=q 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.004 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x70' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 122 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:01.005 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ P == \- ]] 00:12:01.006 19:41:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Pj bXYDqw1>rNnpA7"-(<zRs3+(z7K</Zk 6}}ApT' [...] nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.877 19:41:21
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:12:05.779 00:12:05.779 real 0m9.294s 00:12:05.779 user 0m22.578s 00:12:05.779 sys 0m2.439s 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:05.779 ************************************ 00:12:05.779 END TEST nvmf_invalid 00:12:05.779 ************************************ 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:05.779 ************************************ 00:12:05.779 START TEST nvmf_connect_stress 00:12:05.779 ************************************ 00:12:05.779 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:06.038 * Looking for test storage... 00:12:06.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.038 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:06.039 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@452 -- # prepare_net_devs 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # xtrace_disable 00:12:06.039 19:41:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # pci_devs=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -a pci_devs 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # pci_net_devs=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # pci_drivers=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -A pci_drivers 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # net_devs=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # local -ga net_devs 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # e810=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # local -ga e810 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # x722=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # local -ga x722 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # mlx=() 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # local -ga mlx 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:07.938 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:07.938 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:07.938 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:07.938 19:41:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:07.938 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # is_hw=yes 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.938 19:41:25 
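nvmf_tcp_init, which starts here, turns the two discovered E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a private network namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1 (the two ports are evidently wired to each other on this phy rig, as the cross-pings below confirm). Condensed from the exact commands this trace executes here and in the next line:

  # Start clean, then split the two ports across namespaces.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Address both ends of the link and bring everything up.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP (port 4420) in through the initiator-side firewall.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check reachability in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1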
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.938 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:12:07.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:12:07.939 00:12:07.939 --- 10.0.0.2 ping statistics --- 00:12:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.939 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:12:07.939 00:12:07.939 --- 10.0.0.1 ping statistics --- 00:12:07.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.939 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # return 0 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@725 -- # xtrace_disable 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@485 -- # nvmfpid=1141402 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@486 -- # waitforlisten 1141402 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@832 -- # '[' -z 1141402 ']' 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:07.939 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.197 [2024-07-24 19:41:25.338631] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:12:08.197 [2024-07-24 19:41:25.338705] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.197 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.197 [2024-07-24 19:41:25.400971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:08.197 [2024-07-24 19:41:25.511557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.197 [2024-07-24 19:41:25.511620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.197 [2024-07-24 19:41:25.511648] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.197 [2024-07-24 19:41:25.511659] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.197 [2024-07-24 19:41:25.511669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
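nvmfappstart launches the target inside that namespace: -m 0xE (binary 1110) pins reactors to cores 1-3, matching the three "Reactor started" notices just below; -e 0xFFFF enables every tracepoint group, which is what the "Tracepoint Group Mask 0xFFFF" notice above acknowledges; -i 0 picks shared-memory instance 0. waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A hedged sketch of that start-and-wait shape (not SPDK's literal waitforlisten helper; rpc_get_methods is just a cheap probe RPC, and paths are shortened):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the RPC socket; bail out if the target dies before it listens.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done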
00:12:08.197 [2024-07-24 19:41:25.511759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.197 [2024-07-24 19:41:25.511821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.197 [2024-07-24 19:41:25.511824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@865 -- # return 0 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@731 -- # xtrace_disable 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.456 [2024-07-24 19:41:25.664145] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.456 [2024-07-24 19:41:25.691373] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.456 NULL1 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=1141424 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.456 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
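Provisioning above takes exactly four RPCs, and then the stressor is let loose: connect_stress opens and tears down connections against cnode1 for 10 seconds (-t 10) on core 0 (-c 0x1), while the seq 1 20 loop writes an rpc.txt batch for the supervision loop that follows (the here-doc bodies behind those cat calls are invisible to xtrace, so the batch contents are not shown; presumably that is also where NULL1 gets attached to the subsystem, since no nvmf_subsystem_add_ns appears in this trace). The same sequence as plain rpc.py calls, paths shortened:

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport, opts from NVMF_TRANSPORT_OPTS
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                 # allow any host, serial, max 10 namespaces
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                     # listen on the namespaced target IP
  $rpc bdev_null_create NULL1 1000 512               # 1000 MiB null bdev, 512-byte blocks
  # Stressor: hammer connect/disconnect for 10 seconds, then exit on its own.
  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!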
00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:08.457 19:41:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.715 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:08.715 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:08.715 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.715 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:08.715 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.280 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:09.280 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:09.280 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.280 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:12:09.280 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.538 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:09.538 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:09.538 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.538 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:09.538 19:41:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.795 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:09.795 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:09.795 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.795 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:09.795 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.052 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:10.052 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:10.052 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.052 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:10.052 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.310 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:10.310 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:10.310 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.310 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:10.310 19:41:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.874 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:10.874 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:10.874 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.874 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:10.874 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.132 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:11.132 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:11.132 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.132 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:12:11.132 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.389 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:11.389 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:11.389 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.389 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:11.389 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.646 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:11.646 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:11.646 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.646 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:11.646 19:41:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.210 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:12.210 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:12.210 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.210 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:12.211 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.467 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:12.467 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:12.467 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.467 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:12.467 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.725 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:12.725 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:12.725 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.725 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:12.725 19:41:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.982 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:12.982 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:12.982 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.982 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:12:12.982 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.238 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:13.238 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:13.238 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.238 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:13.238 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.803 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:13.803 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:13.803 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.803 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:13.803 19:41:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.060 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:14.060 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:14.060 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.060 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:14.060 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.318 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:14.318 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:14.318 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.318 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:14.318 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.575 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:14.575 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:14.575 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.575 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:14.575 19:41:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.831 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:14.831 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:14.831 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.831 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:12:14.831 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.394 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:15.395 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:15.395 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.395 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:15.395 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.652 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:15.652 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:15.652 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.652 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:15.652 19:41:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.909 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:15.909 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:15.909 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.909 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:15.909 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.167 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:16.167 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:16.167 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.167 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:16.167 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.425 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:16.425 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:16.425 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.425 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:16.425 19:41:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.990 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:16.990 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:16.990 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.990 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:12:16.990 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.247 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:17.247 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:17.247 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.247 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:17.247 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.505 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:17.505 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:17.505 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.505 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:17.505 19:41:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:17.762 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:17.762 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:17.762 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:17.762 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:17.762 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.020 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:18.020 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:18.020 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.020 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:18.020 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.584 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:18.584 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:18.584 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:18.584 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:18.584 19:41:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:18.584 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1141424 00:12:18.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (1141424) - No such process 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1141424 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.842 rmmod nvme_tcp 00:12:18.842 rmmod nvme_fabrics 00:12:18.842 rmmod nvme_keyring 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # '[' -n 1141402 ']' 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # killprocess 1141402 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' -z 1141402 ']' 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # kill -0 1141402 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # uname 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1141402 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1141402' 00:12:18.842 killing process with pid 1141402 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # kill 1141402 00:12:18.842 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@975 -- # wait 1141402 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # 
nvmf_tcp_fini 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@282 -- # remove_spdk_ns 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.101 19:41:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.037 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:12:21.296 00:12:21.297 real 0m15.285s 00:12:21.297 user 0m38.290s 00:12:21.297 sys 0m5.907s 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:21.297 ************************************ 00:12:21.297 END TEST nvmf_connect_stress 00:12:21.297 ************************************ 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.297 ************************************ 00:12:21.297 START TEST nvmf_fused_ordering 00:12:21.297 ************************************ 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:21.297 * Looking for test storage... 
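The page of kill -0 1141424 / rpc_cmd pairs above is connect_stress.sh's supervision loop: for as long as the stressor process exists, it replays the rpc.txt batch at the live target; the loop ends when kill -0 finally reports "No such process", after which the script reaps the stressor, clears its trap, and calls nvmftestfini, which produces the module unloads, the killprocess of the target, and the namespace/address cleanup traced just above, before fused_ordering begins. Condensed, with hedged stand-ins where the trace hides details (remove_spdk_ns's body is not expanded; ip netns delete below is my assumption for it, and the modprobe retry shape is approximated):

  # Supervision loop, as traced (rpc_cmd is the suite's RPC wrapper).
  while kill -0 "$PERF_PID" 2> /dev/null; do
      rpc_cmd < "$rpcs"             # keep the target busy while the stressor runs
  done
  wait "$PERF_PID"                  # reap the already-exited stressor
  rm -f "$rpcs"
  # nvmftestfini: drop the kernel initiator modules, retried up to 20 times ...
  sync
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  # ... kill the target unless the pid turns out to be a sudo wrapper ...
  [[ $(ps --no-headers -o comm= "$nvmfpid") == sudo ]] || kill "$nvmfpid"
  wait "$nvmfpid"
  # ... and tear the network rig back down.
  ip netns delete cvl_0_0_ns_spdk   # assumption: what remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1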
00:12:21.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:12:21.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@452 -- # prepare_net_devs 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # local -g is_hw=no 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # remove_spdk_ns 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # xtrace_disable 00:12:21.297 19:41:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # pci_devs=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -a pci_devs 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # pci_net_devs=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # pci_drivers=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -A pci_drivers 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # net_devs=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # local -ga net_devs 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # e810=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # local -ga e810 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # x722=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # local -ga x722 00:12:23.197 19:41:40 
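The "[: : integer expression expected" complaint above is benign: build_nvmf_app_args applies a numeric test to a variable that is empty under this configuration (the trace shows only the expanded '[' '' -eq 1 ']', not the variable's name), so the test command prints the error, evaluates false, and the script simply falls through to the have_pci_nics=0 default. Reduced to its shape, with FOO as a placeholder name:

  FOO=""
  [ "$FOO" -eq 1 ]        # bash: [: : integer expression expected (status 2, treated as false)
  [ "${FOO:-0}" -eq 1 ]   # defaulting the expansion would keep the same guard quiet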
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # mlx=() 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # local -ga mlx 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:23.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:23.197 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@345 -- # 
echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:23.198 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:23.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:23.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # (( 2 == 0 )) 
00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # is_hw=yes 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.198 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:12:23.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:12:23.456 00:12:23.456 --- 10.0.0.2 ping statistics --- 00:12:23.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.456 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:12:23.456 00:12:23.456 --- 10.0.0.1 ping statistics --- 00:12:23.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.456 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # return 0 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:12:23.456 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@725 -- # xtrace_disable 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@485 -- # nvmfpid=1144570 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@486 -- # waitforlisten 1144570 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # '[' -z 1144570 ']' 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:23.457 19:41:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.457 [2024-07-24 19:41:40.715290] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:12:23.457 [2024-07-24 19:41:40.715371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.457 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.457 [2024-07-24 19:41:40.782422] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.715 [2024-07-24 19:41:40.895479] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.715 [2024-07-24 19:41:40.895532] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.715 [2024-07-24 19:41:40.895561] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.715 [2024-07-24 19:41:40.895573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.715 [2024-07-24 19:41:40.895583] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.715 [2024-07-24 19:41:40.895609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@865 -- # return 0 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@731 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 [2024-07-24 19:41:41.045325] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:23.715 19:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 [2024-07-24 19:41:41.061566] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 NULL1 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:23.715 19:41:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:23.973 [2024-07-24 19:41:41.108179] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:12:23.973 [2024-07-24 19:41:41.108222] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1144711 ] 00:12:23.973 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.231 Attached to nqn.2016-06.io.spdk:cnode1 00:12:24.231 Namespace ID: 1 size: 1GB 00:12:24.231 fused_ordering(0)
[... fused_ordering(1) through fused_ordering(1022) elided: one repetitive trace line per fused command iteration, logged between 00:12:24.231 and 00:12:26.494 ...]
00:12:26.494 fused_ordering(1023) 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # nvmfcleanup 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.494 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.494 rmmod nvme_tcp 00:12:26.494 rmmod nvme_fabrics 00:12:26.494 rmmod nvme_keyring 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129
-- # return 0 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # '[' -n 1144570 ']' 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # killprocess 1144570 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' -z 1144570 ']' 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # kill -0 1144570 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # uname 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1144570 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1144570' 00:12:26.754 killing process with pid 1144570 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # kill 1144570 00:12:26.754 19:41:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@975 -- # wait 1144570 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@282 -- # remove_spdk_ns 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.014 19:41:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.916 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:12:28.916 00:12:28.916 real 0m7.778s 00:12:28.916 user 0m5.396s 00:12:28.916 sys 0m3.407s 00:12:28.916 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:28.916 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:28.916 ************************************ 00:12:28.916 END TEST nvmf_fused_ordering 00:12:28.916 ************************************ 00:12:28.916 19:41:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:28.916 19:41:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:12:28.916 19:41:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:28.916 19:41:46 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.175 ************************************ 00:12:29.175 START TEST nvmf_ns_masking 00:12:29.175 ************************************ 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:29.175 * Looking for test storage... 00:12:29.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.175 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triple repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH]
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH]
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [the exported PATH as above]
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.176
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c3b28aab-c6ab-4500-afc3-58740fa15f95 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=51c1442b-1d23-4847-91a9-9151dc2afe31 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=580a8b9a-3820-4d89-b5c8-bb58d7c82ba0 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@452 -- # prepare_net_devs 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # local -g is_hw=no 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # remove_spdk_ns 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # xtrace_disable 00:12:29.176 19:41:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
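# The "[: : integer expression expected" message above is a captured shell
# bug rather than a test failure: nvmf/common.sh line 33 evaluates
# '[' '' -eq 1 ']', and test(1) rejects an empty string as an operand of
# -eq. A minimal sketch of the usual guard; "flag" stands in for whichever
# variable line 33 actually tests (the trace elides its name):
flag=${flag:-0}               # default empty/unset to 0 so -eq always sees an integer
if [ "$flag" -eq 1 ]; then    # hypothetical guarded form of the line-33 check
    echo "feature enabled"
fi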
nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # pci_devs=() 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -a pci_devs 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # pci_net_devs=() 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # pci_drivers=() 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -A pci_drivers 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # net_devs=() 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # local -ga net_devs 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # e810=() 00:12:31.078 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # local -ga e810 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # x722=() 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # local -ga x722 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # mlx=() 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # local -ga mlx 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:12:31.079 19:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:31.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:31.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:31.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@386 -- 
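# The device scan above resolves each E810 PCI function to its kernel net
# device through sysfs, and repeats just below for the second port.
# Replayed as a standalone sketch (PCI addresses taken from the trace):
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] && echo "$pci -> ${dev##*/}"   # e.g. 0000:0a:00.0 -> cvl_0_0
    done
done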
# for pci in "${pci_devs[@]}" 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:31.079 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # is_hw=yes 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.079 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.337 19:41:48 
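# The netns commands above (completed by the link-up, iptables and ping
# steps immediately below) build the loopback-over-real-NICs topology the
# rest of the test runs on: one E810 port becomes the target at 10.0.0.2
# inside a private namespace, its cabled peer stays in the root namespace
# as the initiator at 10.0.0.1. Condensed, using the same interface names:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port, isolated
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                     # reachability check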
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:12:31.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:12:31.337 00:12:31.337 --- 10.0.0.2 ping statistics --- 00:12:31.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.337 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:12:31.337 00:12:31.337 --- 10.0.0.1 ping statistics --- 00:12:31.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.337 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # return 0 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:12:31.337 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@725 -- # xtrace_disable 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@485 -- # nvmfpid=1146920 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@486 -- # waitforlisten 1146920 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@832 -- # '[' -z 1146920 ']' 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:31.338 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.338 [2024-07-24 19:41:48.640594] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:12:31.338 [2024-07-24 19:41:48.640684] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.338 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.338 [2024-07-24 19:41:48.715912] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.596 [2024-07-24 19:41:48.837709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.596 [2024-07-24 19:41:48.837775] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.596 [2024-07-24 19:41:48.837793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.596 [2024-07-24 19:41:48.837806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.596 [2024-07-24 19:41:48.837818] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
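# waitforlisten above blocks until the freshly started nvmf_tgt answers on
# its UNIX-domain RPC socket. A minimal sketch of that pattern, assuming
# rpc.py from the same SPDK checkout ($SPDK_DIR is illustrative, and
# rpc_get_methods is used only as a cheap liveness probe):
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
pid=$!
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done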
00:12:31.596 [2024-07-24 19:41:48.837849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.596 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:31.596 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@865 -- # return 0 00:12:31.596 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:12:31.596 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@731 -- # xtrace_disable 00:12:31.596 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.854 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.854 19:41:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:32.112 [2024-07-24 19:41:49.258893] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.112 19:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:32.112 19:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:32.112 19:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:32.370 Malloc1 00:12:32.370 19:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:32.628 Malloc2 00:12:32.628 19:41:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.886 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:33.143 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.401 [2024-07-24 19:41:50.572096] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.401 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:33.401 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 580a8b9a-3820-4d89-b5c8-bb58d7c82ba0 -a 10.0.0.2 -s 4420 -i 4 00:12:33.401 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.401 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local i=0 00:12:33.401 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.401 19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:12:33.401 
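# Target provisioning as traced above, condensed to the bare RPC sequence
# ("rpc" is shorthand for the repo's scripts/rpc.py): a TCP transport, two
# 64 MiB / 512 B malloc bdevs, a subsystem that allows any host (-a), one
# auto-visible namespace, a listener -- then a kernel-initiator connect.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc1
rpc bdev_malloc_create 64 512 -b Malloc2
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 -i 4 \
     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1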
19:41:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # sleep 2 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # return 0 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.931 [ 0]:0x1 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8c56b0549f84b6e9b4832c2c718e131 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8c56b0549f84b6e9b4832c2c718e131 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.931 19:41:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:35.931 [ 0]:0x1 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8c56b0549f84b6e9b4832c2c718e131 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8c56b0549f84b6e9b4832c2c718e131 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.931 19:41:53 
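# The visibility probe traced above, replayed as a standalone helper: list
# the active NSIDs, then treat an all-zero NGUID from id-ns as "masked".
ns_is_visible() {
    local nsid=$1 nguid
    nvme list-ns /dev/nvme0 | grep "$nsid"            # prints "[ 0]:0x1" when present
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]   # zero NGUID == masked
}
ns_is_visible 0x1 && echo "nsid 1 visible to this host"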
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:35.931 [ 1]:0x2 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.931 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.193 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:36.452 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:36.452 19:41:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 580a8b9a-3820-4d89-b5c8-bb58d7c82ba0 -a 10.0.0.2 -s 4420 -i 4 00:12:36.711 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:36.711 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local i=0 00:12:36.711 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.711 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # [[ -n 1 ]] 00:12:36.711 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # nvme_device_counter=1 00:12:36.711 19:41:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # sleep 2 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # 
return 0 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # local es=0 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@639 -- # local arg=ns_is_visible 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -t ns_is_visible 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # ns_is_visible 0x1 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # es=1 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.238 [ 0]:0x2 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.238 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
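# "NOT ns_is_visible 0x1" above is the harness's negative assertion: the
# wrapper succeeds only when the wrapped command fails, which is how the
# test proves a --no-auto-visible namespace stays hidden until the host is
# explicitly added. Reduced to its essence (the real helper also records
# the exit status, seen as es=1 in the trace):
NOT() { ! "$@"; }
NOT ns_is_visible 0x1 && echo "nsid 1 correctly masked"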
nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.239 [ 0]:0x1 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8c56b0549f84b6e9b4832c2c718e131 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8c56b0549f84b6e9b4832c2c718e131 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.239 [ 1]:0x2 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.239 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # local es=0 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # valid_exec_arg ns_is_visible 0x1 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@639 -- # local arg=ns_is_visible 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -t ns_is_visible 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:39.522 19:41:56 
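# Per-host masking is driven by exactly two RPCs, both visible in the
# trace: nvmf_ns_add_host unmasks a --no-auto-visible namespace for one
# host NQN, and nvmf_ns_remove_host masks it again. Same "rpc" shorthand:
rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1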
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # ns_is_visible 0x1 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:39.522 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # es=1 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:39.783 [ 0]:0x2 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:39.783 19:41:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:39.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.783 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 580a8b9a-3820-4d89-b5c8-bb58d7c82ba0 -a 10.0.0.2 -s 4420 -i 4 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local i=0 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # [[ -n 2 ]] 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # nvme_device_counter=2 00:12:40.040 19:41:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # sleep 2 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # nvme_devices=2 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # return 0 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:42.567 [ 0]:0x1 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8c56b0549f84b6e9b4832c2c718e131 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8c56b0549f84b6e9b4832c2c718e131 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.567 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:42.568 [ 1]:0x2 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # local es=0 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # valid_exec_arg ns_is_visible 0x1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@639 -- # local arg=ns_is_visible 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -t ns_is_visible 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # ns_is_visible 0x1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # es=1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:42.568 [ 0]:0x2 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.568 19:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # local es=0 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:42.568 19:41:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:42.826 [2024-07-24 19:42:00.129280] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:42.826 request: 00:12:42.826 { 00:12:42.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:42.826 "nsid": 2, 00:12:42.826 "host": "nqn.2016-06.io.spdk:host1", 00:12:42.826 "method": "nvmf_ns_remove_host", 00:12:42.826 "req_id": 1 00:12:42.826 } 00:12:42.826 Got JSON-RPC error response 00:12:42.826 response: 00:12:42.826 { 00:12:42.826 "code": -32602, 00:12:42.826 "message": "Invalid parameters" 00:12:42.826 } 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # es=1 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # local es=0 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # valid_exec_arg ns_is_visible 0x1 00:12:42.826 19:42:00 
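# The -32602 "Invalid parameters" reply above is the expected-failure leg
# of the test: namespace 2 was added without --no-auto-visible, so it has
# no per-host visibility list for nvmf_ns_remove_host to edit. For
# illustration only, the same request sent raw over the RPC socket
# (assuming a netcat build with UNIX-socket support) should elicit the
# same error object:
printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"nvmf_ns_remove_host","params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":2,"host":"nqn.2016-06.io.spdk:host1"}}' \
    | nc -U -w 1 /var/tmp/spdk.sock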
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@639 -- # local arg=ns_is_visible 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -t ns_is_visible 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # ns_is_visible 0x1 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # es=1 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:12:42.826 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:43.084 [ 0]:0x2 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ea790a41a9db493db28fa75efc0b35fa 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ea790a41a9db493db28fa75efc0b35fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:43.084 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1148568 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1148568 /var/tmp/host.sock 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@832 -- # '[' -z 1148568 ']' 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/host.sock 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:43.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:43.085 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:43.344 [2024-07-24 19:42:00.472115] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:12:43.344 [2024-07-24 19:42:00.472193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148568 ] 00:12:43.344 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.344 [2024-07-24 19:42:00.534963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.344 [2024-07-24 19:42:00.651768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:43.602 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:43.602 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@865 -- # return 0 00:12:43.602 19:42:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:43.860 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:44.117 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c3b28aab-c6ab-4500-afc3-58740fa15f95 00:12:44.117 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@763 -- # tr -d - 00:12:44.117 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C3B28AABC6AB4500AFC358740FA15F95 -i 00:12:44.374 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 51c1442b-1d23-4847-91a9-9151dc2afe31 00:12:44.374 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@763 -- # tr -d - 00:12:44.374 19:42:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 51C1442B1D23484791A99151DC2AFE31 -i 00:12:44.632 19:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.890 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:45.148 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:45.148 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:45.714 nvme0n1 00:12:45.714 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:45.714 19:42:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:45.972 nvme1n2 00:12:45.972 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:45.972 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:45.972 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:45.972 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:45.972 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c3b28aab-c6ab-4500-afc3-58740fa15f95 == \c\3\b\2\8\a\a\b\-\c\6\a\b\-\4\5\0\0\-\a\f\c\3\-\5\8\7\4\0\f\a\1\5\f\9\5 ]] 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:46.535 19:42:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:46.792 19:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 51c1442b-1d23-4847-91a9-9151dc2afe31 == \5\1\c\1\4\4\2\b\-\1\d\2\3\-\4\8\4\7\-\9\1\a\9\-\9\1\5\1\d\c\2\a\f\e\3\1 ]] 00:12:46.792 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1148568 00:12:46.792 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' -z 1148568 ']' 00:12:46.792 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # kill -0 1148568 00:12:46.792 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # uname 00:12:46.792 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:46.792 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1148568 00:12:47.049 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:12:47.049 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:12:47.049 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1148568' 00:12:47.049 killing process with pid 1148568 00:12:47.049 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # kill 1148568 00:12:47.049 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@975 -- # wait 1148568 00:12:47.306 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # nvmfcleanup 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:47.563 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:47.563 rmmod nvme_tcp 00:12:47.563 rmmod nvme_fabrics 00:12:47.563 rmmod nvme_keyring 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # '[' -n 1146920 ']' 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # killprocess 1146920 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' -z 1146920 ']' 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # kill -0 
1146920 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # uname 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1146920 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1146920' 00:12:47.821 killing process with pid 1146920 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # kill 1146920 00:12:47.821 19:42:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@975 -- # wait 1146920 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@282 -- # remove_spdk_ns 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:48.079 19:42:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.979 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:12:49.979 00:12:49.979 real 0m21.060s 00:12:49.979 user 0m27.452s 00:12:49.979 sys 0m4.140s 00:12:49.979 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:49.979 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:49.979 ************************************ 00:12:49.979 END TEST nvmf_ns_masking 00:12:49.979 ************************************ 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:50.238 ************************************ 00:12:50.238 START TEST nvmf_nvme_cli 00:12:50.238 ************************************ 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:50.238 * 
Looking for test storage... 00:12:50.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.238 
19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:50.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@452 -- # prepare_net_devs 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # local -g is_hw=no 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # remove_spdk_ns 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # xtrace_disable 00:12:50.238 19:42:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # pci_devs=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -a pci_devs 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # pci_net_devs=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # pci_drivers=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -A pci_drivers 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@299 -- # net_devs=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@299 -- # local -ga net_devs 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@300 -- # e810=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@300 -- # local -ga e810 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # x722=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # local -ga x722 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # mlx=() 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # local -ga mlx 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.141 19:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:52.141 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:52.141 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:52.141 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # [[ up == up ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:52.141 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # is_hw=yes 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 
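The nvmf_tcp_init steps traced below move the target-side e810 port into a private network namespace and give each side a point-to-point 10.0.0.x/24 address, so the SPDK target and the kernel initiator run on separate network stacks even though both ports live in the same machine. A minimal sketch of that setup, assuming the cvl_0_0/cvl_0_1 device names reported above:

    # Isolate the target-side port in its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # The initiator keeps cvl_0_1 in the default namespace; 10.0.0.1 <-> 10.0.0.2.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring up both ports, plus loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port in the firewall, then confirm reachability.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2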
00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.141 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:12:52.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:12:52.142 00:12:52.142 --- 10.0.0.2 ping statistics --- 00:12:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.142 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:12:52.142 00:12:52.142 --- 10.0.0.1 ping statistics --- 00:12:52.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.142 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # return 0 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@725 -- # xtrace_disable 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@485 -- # nvmfpid=1151642 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@486 -- # waitforlisten 1151642 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # '[' -z 1151642 ']' 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:52.142 19:42:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.400 [2024-07-24 19:42:09.540481] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
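With the namespace reachable in both directions, the test launches nvmf_tgt inside it and configures the target over JSON-RPC: a TCP transport, two 64 MiB malloc bdevs as namespaces, and a subsystem (plus the discovery service) listening on 10.0.0.2:4420. A condensed sketch of the RPC sequence traced below, calling scripts/rpc.py directly rather than going through the test's rpc_cmd wrapper:

    # Run the target on the namespaced network stack (4 cores, all trace groups enabled).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB, 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420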
00:12:52.400 [2024-07-24 19:42:09.540568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.400 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.400 [2024-07-24 19:42:09.614091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.400 [2024-07-24 19:42:09.735855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.400 [2024-07-24 19:42:09.735912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.400 [2024-07-24 19:42:09.735928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.400 [2024-07-24 19:42:09.735941] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.400 [2024-07-24 19:42:09.735952] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.400 [2024-07-24 19:42:09.736012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.400 [2024-07-24 19:42:09.736068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.400 [2024-07-24 19:42:09.736119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.400 [2024-07-24 19:42:09.736122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@865 -- # return 0 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@731 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 [2024-07-24 19:42:10.531896] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 Malloc0 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:53.335 19:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 Malloc1 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 [2024-07-24 19:42:10.617641] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:53.335 00:12:53.335 Discovery Log Number of Records 2, Generation counter 2 00:12:53.335 =====Discovery Log Entry 0====== 00:12:53.335 trtype: tcp 00:12:53.335 adrfam: ipv4 00:12:53.335 subtype: current discovery subsystem 00:12:53.335 treq: not required 
00:12:53.335 portid: 0 00:12:53.335 trsvcid: 4420 00:12:53.335 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:53.335 traddr: 10.0.0.2 00:12:53.335 eflags: explicit discovery connections, duplicate discovery information 00:12:53.335 sectype: none 00:12:53.335 =====Discovery Log Entry 1====== 00:12:53.335 trtype: tcp 00:12:53.335 adrfam: ipv4 00:12:53.335 subtype: nvme subsystem 00:12:53.335 treq: not required 00:12:53.335 portid: 0 00:12:53.335 trsvcid: 4420 00:12:53.335 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:53.335 traddr: 10.0.0.2 00:12:53.335 eflags: none 00:12:53.335 sectype: none 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:53.335 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # local dev _ 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # nvme list 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ Node == /dev/nvme* ]] 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ --------------------- == /dev/nvme* ]] 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:53.336 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:53.593 19:42:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:54.159 19:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:54.159 19:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local i=0 00:12:54.159 19:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.159 19:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # [[ -n 2 ]] 00:12:54.159 19:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # nvme_device_counter=2 00:12:54.159 19:42:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # sleep 2 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # nvme_devices=2 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # return 0 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # local dev _ 00:12:56.055 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # nvme list 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ Node == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # echo /dev/nvme0n2 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # echo /dev/nvme0n1 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:56.056 /dev/nvme0n1 ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # local dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # nvme list 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ Node == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ --------------------- == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # echo /dev/nvme0n2 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@529 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:56.056 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@530 -- # echo /dev/nvme0n1 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@528 -- # read -r dev _ 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:56.312 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # local i=0 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1232 -- # return 0 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@562 -- # xtrace_disable 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # nvmfcleanup 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.312 rmmod nvme_tcp 00:12:56.312 rmmod nvme_fabrics 00:12:56.312 rmmod nvme_keyring 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # '[' -n 1151642 ']' 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # killprocess 1151642 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' -z 1151642 ']' 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # kill -0 1151642 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # uname 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1151642 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1151642' 00:12:56.312 killing process with pid 1151642 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # kill 1151642 00:12:56.312 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@975 -- # wait 1151642 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@282 -- # remove_spdk_ns 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.878 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.879 19:42:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:12:58.820 00:12:58.820 real 0m8.610s 00:12:58.820 user 0m17.308s 00:12:58.820 sys 0m2.210s 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # xtrace_disable 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:58.820 ************************************ 00:12:58.820 END TEST nvmf_nvme_cli 00:12:58.820 ************************************ 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:58.820 ************************************ 00:12:58.820 START TEST nvmf_vfio_user 00:12:58.820 ************************************ 00:12:58.820 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:58.820 * Looking for test storage... 
00:12:58.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:58.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:58.821 19:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1152579 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1152579' 00:12:58.821 Process pid: 1152579 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1152579 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@832 -- # '[' -z 1152579 ']' 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local max_retries=100 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:58.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@841 -- # xtrace_disable 00:12:58.821 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:58.821 [2024-07-24 19:42:16.180882] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:12:58.821 [2024-07-24 19:42:16.180972] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:59.078 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.078 [2024-07-24 19:42:16.240439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:59.078 [2024-07-24 19:42:16.348021] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
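The "[: : integer expression expected" error printed from nvmf/common.sh line 33 a short way above is the classic empty-operand pitfall: test's -eq requires integer operands, and the expansion behind '[' '' -eq 1 ']' is empty in this configuration, so the test aborts and returns nonzero. A hedged sketch of the failure and the usual guard (VAR is a stand-in name, not the variable common.sh actually tests):

  VAR=""
  [ "$VAR" -eq 1 ] && echo enabled        # errors: [: : integer expression expected
  [ "${VAR:-0}" -eq 1 ] && echo enabled   # defaulting the expansion keeps the test well-formed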
00:12:59.078 [2024-07-24 19:42:16.348076] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:59.078 [2024-07-24 19:42:16.348104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:59.078 [2024-07-24 19:42:16.348115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:59.078 [2024-07-24 19:42:16.348125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:59.078 [2024-07-24 19:42:16.348213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:59.078 [2024-07-24 19:42:16.348282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:59.078 [2024-07-24 19:42:16.348567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:59.078 [2024-07-24 19:42:16.348573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.334 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:12:59.334 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@865 -- # return 0 00:12:59.334 19:42:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:00.266 19:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:00.523 19:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:00.523 19:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:00.523 19:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:00.524 19:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:00.524 19:42:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:00.781 Malloc1 00:13:00.781 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:01.038 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:01.295 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:01.552 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.552 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:01.552 19:42:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:01.809 Malloc2 00:13:01.809 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:02.066 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:02.323 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:02.581 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:02.581 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:02.581 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:02.581 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:02.581 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:02.581 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:02.581 [2024-07-24 19:42:19.805022] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:13:02.581 [2024-07-24 19:42:19.805069] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1153003 ] 00:13:02.581 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.582 [2024-07-24 19:42:19.840606] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:02.582 [2024-07-24 19:42:19.850213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.582 [2024-07-24 19:42:19.850268] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f345b3af000 00:13:02.582 [2024-07-24 19:42:19.851206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.852206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.853206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.854213] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.855218] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.856225] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.857235] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.858253] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:02.582 [2024-07-24 19:42:19.859263] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:02.582 [2024-07-24 19:42:19.859285] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f345b3a4000 00:13:02.582 [2024-07-24 19:42:19.860406] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:02.582 [2024-07-24 19:42:19.875902] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:02.582 [2024-07-24 19:42:19.875938] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:02.582 [2024-07-24 19:42:19.880388] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.582 [2024-07-24 19:42:19.880443] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:02.582 [2024-07-24 19:42:19.880540] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:02.582 [2024-07-24 19:42:19.880581] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:02.582 [2024-07-24 19:42:19.880593] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:02.582 [2024-07-24 19:42:19.881385] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:02.582 [2024-07-24 19:42:19.881411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:02.582 [2024-07-24 19:42:19.881426] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:02.582 [2024-07-24 19:42:19.882387] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:02.582 [2024-07-24 19:42:19.882406] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:02.582 [2024-07-24 19:42:19.882419] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:02.582 [2024-07-24 19:42:19.883392] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:02.582 [2024-07-24 19:42:19.883412] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state 
to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:02.582 [2024-07-24 19:42:19.884403] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:02.582 [2024-07-24 19:42:19.884423] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:02.582 [2024-07-24 19:42:19.884432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:02.582 [2024-07-24 19:42:19.884444] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:02.582 [2024-07-24 19:42:19.884567] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:02.582 [2024-07-24 19:42:19.884576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:02.582 [2024-07-24 19:42:19.884585] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:02.582 [2024-07-24 19:42:19.885411] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:02.582 [2024-07-24 19:42:19.886418] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:02.582 [2024-07-24 19:42:19.887428] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:02.582 [2024-07-24 19:42:19.888421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.582 [2024-07-24 19:42:19.888534] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:02.582 [2024-07-24 19:42:19.889443] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:02.582 [2024-07-24 19:42:19.889462] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:02.582 [2024-07-24 19:42:19.889472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889497] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:02.582 [2024-07-24 19:42:19.889511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889551] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.582 [2024-07-24 19:42:19.889561] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.582 [2024-07-24 19:42:19.889568] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP 
entries: 1 00:13:02.582 [2024-07-24 19:42:19.889602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.582 [2024-07-24 19:42:19.889650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:02.582 [2024-07-24 19:42:19.889666] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:02.582 [2024-07-24 19:42:19.889678] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:02.582 [2024-07-24 19:42:19.889685] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:02.582 [2024-07-24 19:42:19.889693] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:02.582 [2024-07-24 19:42:19.889701] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:02.582 [2024-07-24 19:42:19.889708] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:02.582 [2024-07-24 19:42:19.889716] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889747] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:02.582 [2024-07-24 19:42:19.889762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:02.582 [2024-07-24 19:42:19.889785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.582 [2024-07-24 19:42:19.889798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.582 [2024-07-24 19:42:19.889810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.582 [2024-07-24 19:42:19.889822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.582 [2024-07-24 19:42:19.889830] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:02.582 [2024-07-24 19:42:19.889871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:02.582 [2024-07-24 19:42:19.889881] 
nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:02.582 [2024-07-24 19:42:19.889889] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889903] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.889927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.582 [2024-07-24 19:42:19.889940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:02.582 [2024-07-24 19:42:19.890004] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.890020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:02.582 [2024-07-24 19:42:19.890037] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:02.582 [2024-07-24 19:42:19.890046] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:02.583 [2024-07-24 19:42:19.890052] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.583 [2024-07-24 19:42:19.890061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890095] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:02.583 [2024-07-24 19:42:19.890110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890136] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.583 [2024-07-24 19:42:19.890144] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.583 [2024-07-24 19:42:19.890150] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.583 [2024-07-24 19:42:19.890159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 
19:42:19.890199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890249] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:02.583 [2024-07-24 19:42:19.890259] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.583 [2024-07-24 19:42:19.890265] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.583 [2024-07-24 19:42:19.890275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890307] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890347] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890366] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890378] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:02.583 [2024-07-24 19:42:19.890386] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:02.583 [2024-07-24 19:42:19.890394] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:02.583 [2024-07-24 19:42:19.890422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890462] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 
19:42:19.890491] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890571] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:02.583 [2024-07-24 19:42:19.890596] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:02.583 [2024-07-24 19:42:19.890603] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:02.583 [2024-07-24 19:42:19.890608] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:02.583 [2024-07-24 19:42:19.890614] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:02.583 [2024-07-24 19:42:19.890623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:02.583 [2024-07-24 19:42:19.890636] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:02.583 [2024-07-24 19:42:19.890643] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:02.583 [2024-07-24 19:42:19.890649] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.583 [2024-07-24 19:42:19.890658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890669] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:02.583 [2024-07-24 19:42:19.890677] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:02.583 [2024-07-24 19:42:19.890682] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.583 [2024-07-24 19:42:19.890691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890703] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:02.583 [2024-07-24 19:42:19.890711] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:02.583 [2024-07-24 19:42:19.890717] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:02.583 [2024-07-24 19:42:19.890726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:02.583 [2024-07-24 19:42:19.890740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:02.583 
[2024-07-24 19:42:19.890761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:02.583 [2024-07-24 19:42:19.890792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:02.583 ===================================================== 00:13:02.583 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:02.583 ===================================================== 00:13:02.583 Controller Capabilities/Features 00:13:02.583 ================================ 00:13:02.583 Vendor ID: 4e58 00:13:02.583 Subsystem Vendor ID: 4e58 00:13:02.583 Serial Number: SPDK1 00:13:02.583 Model Number: SPDK bdev Controller 00:13:02.583 Firmware Version: 24.09 00:13:02.583 Recommended Arb Burst: 6 00:13:02.583 IEEE OUI Identifier: 8d 6b 50 00:13:02.583 Multi-path I/O 00:13:02.583 May have multiple subsystem ports: Yes 00:13:02.583 May have multiple controllers: Yes 00:13:02.583 Associated with SR-IOV VF: No 00:13:02.583 Max Data Transfer Size: 131072 00:13:02.583 Max Number of Namespaces: 32 00:13:02.583 Max Number of I/O Queues: 127 00:13:02.583 NVMe Specification Version (VS): 1.3 00:13:02.583 NVMe Specification Version (Identify): 1.3 00:13:02.583 Maximum Queue Entries: 256 00:13:02.583 Contiguous Queues Required: Yes 00:13:02.583 Arbitration Mechanisms Supported 00:13:02.583 Weighted Round Robin: Not Supported 00:13:02.583 Vendor Specific: Not Supported 00:13:02.583 Reset Timeout: 15000 ms 00:13:02.583 Doorbell Stride: 4 bytes 00:13:02.583 NVM Subsystem Reset: Not Supported 00:13:02.583 Command Sets Supported 00:13:02.583 NVM Command Set: Supported 00:13:02.583 Boot Partition: Not Supported 00:13:02.583 Memory Page Size Minimum: 4096 bytes 00:13:02.583 Memory Page Size Maximum: 4096 bytes 00:13:02.583 Persistent Memory Region: Not Supported 00:13:02.583 Optional Asynchronous Events Supported 00:13:02.583 Namespace Attribute Notices: Supported 00:13:02.583 Firmware Activation Notices: Not Supported 00:13:02.583 ANA Change Notices: Not Supported 00:13:02.583 PLE Aggregate Log Change Notices: Not Supported 00:13:02.583 LBA Status Info Alert Notices: Not Supported 00:13:02.583 EGE Aggregate Log Change Notices: Not Supported 00:13:02.583 Normal NVM Subsystem Shutdown event: Not Supported 00:13:02.583 Zone Descriptor Change Notices: Not Supported 00:13:02.583 Discovery Log Change Notices: Not Supported 00:13:02.583 Controller Attributes 00:13:02.583 128-bit Host Identifier: Supported 00:13:02.583 Non-Operational Permissive Mode: Not Supported 00:13:02.583 NVM Sets: Not Supported 00:13:02.583 Read Recovery Levels: Not Supported 00:13:02.583 Endurance Groups: Not Supported 00:13:02.583 Predictable Latency Mode: Not Supported 00:13:02.583 Traffic Based Keep ALive: Not Supported 00:13:02.583 Namespace Granularity: Not Supported 00:13:02.583 SQ Associations: Not Supported 00:13:02.583 UUID List: Not Supported 00:13:02.583 Multi-Domain Subsystem: Not Supported 00:13:02.583 Fixed Capacity Management: Not Supported 00:13:02.584 Variable Capacity Management: Not Supported 00:13:02.584 Delete Endurance Group: Not Supported 00:13:02.584 Delete NVM Set: Not Supported 00:13:02.584 Extended LBA Formats Supported: Not Supported 00:13:02.584 Flexible Data Placement Supported: Not Supported 
00:13:02.584 00:13:02.584 Controller Memory Buffer Support 00:13:02.584 ================================ 00:13:02.584 Supported: No 00:13:02.584 00:13:02.584 Persistent Memory Region Support 00:13:02.584 ================================ 00:13:02.584 Supported: No 00:13:02.584 00:13:02.584 Admin Command Set Attributes 00:13:02.584 ============================ 00:13:02.584 Security Send/Receive: Not Supported 00:13:02.584 Format NVM: Not Supported 00:13:02.584 Firmware Activate/Download: Not Supported 00:13:02.584 Namespace Management: Not Supported 00:13:02.584 Device Self-Test: Not Supported 00:13:02.584 Directives: Not Supported 00:13:02.584 NVMe-MI: Not Supported 00:13:02.584 Virtualization Management: Not Supported 00:13:02.584 Doorbell Buffer Config: Not Supported 00:13:02.584 Get LBA Status Capability: Not Supported 00:13:02.584 Command & Feature Lockdown Capability: Not Supported 00:13:02.584 Abort Command Limit: 4 00:13:02.584 Async Event Request Limit: 4 00:13:02.584 Number of Firmware Slots: N/A 00:13:02.584 Firmware Slot 1 Read-Only: N/A 00:13:02.584 Firmware Activation Without Reset: N/A 00:13:02.584 Multiple Update Detection Support: N/A 00:13:02.584 Firmware Update Granularity: No Information Provided 00:13:02.584 Per-Namespace SMART Log: No 00:13:02.584 Asymmetric Namespace Access Log Page: Not Supported 00:13:02.584 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:02.584 Command Effects Log Page: Supported 00:13:02.584 Get Log Page Extended Data: Supported 00:13:02.584 Telemetry Log Pages: Not Supported 00:13:02.584 Persistent Event Log Pages: Not Supported 00:13:02.584 Supported Log Pages Log Page: May Support 00:13:02.584 Commands Supported & Effects Log Page: Not Supported 00:13:02.584 Feature Identifiers & Effects Log Page:May Support 00:13:02.584 NVMe-MI Commands & Effects Log Page: May Support 00:13:02.584 Data Area 4 for Telemetry Log: Not Supported 00:13:02.584 Error Log Page Entries Supported: 128 00:13:02.584 Keep Alive: Supported 00:13:02.584 Keep Alive Granularity: 10000 ms 00:13:02.584 00:13:02.584 NVM Command Set Attributes 00:13:02.584 ========================== 00:13:02.584 Submission Queue Entry Size 00:13:02.584 Max: 64 00:13:02.584 Min: 64 00:13:02.584 Completion Queue Entry Size 00:13:02.584 Max: 16 00:13:02.584 Min: 16 00:13:02.584 Number of Namespaces: 32 00:13:02.584 Compare Command: Supported 00:13:02.584 Write Uncorrectable Command: Not Supported 00:13:02.584 Dataset Management Command: Supported 00:13:02.584 Write Zeroes Command: Supported 00:13:02.584 Set Features Save Field: Not Supported 00:13:02.584 Reservations: Not Supported 00:13:02.584 Timestamp: Not Supported 00:13:02.584 Copy: Supported 00:13:02.584 Volatile Write Cache: Present 00:13:02.584 Atomic Write Unit (Normal): 1 00:13:02.584 Atomic Write Unit (PFail): 1 00:13:02.584 Atomic Compare & Write Unit: 1 00:13:02.584 Fused Compare & Write: Supported 00:13:02.584 Scatter-Gather List 00:13:02.584 SGL Command Set: Supported (Dword aligned) 00:13:02.584 SGL Keyed: Not Supported 00:13:02.584 SGL Bit Bucket Descriptor: Not Supported 00:13:02.584 SGL Metadata Pointer: Not Supported 00:13:02.584 Oversized SGL: Not Supported 00:13:02.584 SGL Metadata Address: Not Supported 00:13:02.584 SGL Offset: Not Supported 00:13:02.584 Transport SGL Data Block: Not Supported 00:13:02.584 Replay Protected Memory Block: Not Supported 00:13:02.584 00:13:02.584 Firmware Slot Information 00:13:02.584 ========================= 00:13:02.584 Active slot: 1 00:13:02.584 Slot 1 Firmware Revision: 24.09 00:13:02.584 
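As a reading aid for the nvme_qpair debug trace earlier in this run: each IDENTIFY admin command carries its CNS code in CDW10, which is how the controller data, the active namespace list, the namespace itself, and its ID descriptors were fetched one by one to build this report. Per the NVMe base specification:

  cdw10:00000001  ->  CNS 01h  Identify Controller
  cdw10:00000002  ->  CNS 02h  Active Namespace ID List
  cdw10:00000000  ->  CNS 00h  Identify Namespace (nsid:1)
  cdw10:00000003  ->  CNS 03h  Namespace Identification Descriptor List (nsid:1)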
00:13:02.584 00:13:02.584 Commands Supported and Effects 00:13:02.584 ============================== 00:13:02.584 Admin Commands 00:13:02.584 -------------- 00:13:02.584 Get Log Page (02h): Supported 00:13:02.584 Identify (06h): Supported 00:13:02.584 Abort (08h): Supported 00:13:02.584 Set Features (09h): Supported 00:13:02.584 Get Features (0Ah): Supported 00:13:02.584 Asynchronous Event Request (0Ch): Supported 00:13:02.584 Keep Alive (18h): Supported 00:13:02.584 I/O Commands 00:13:02.584 ------------ 00:13:02.584 Flush (00h): Supported LBA-Change 00:13:02.584 Write (01h): Supported LBA-Change 00:13:02.584 Read (02h): Supported 00:13:02.584 Compare (05h): Supported 00:13:02.584 Write Zeroes (08h): Supported LBA-Change 00:13:02.584 Dataset Management (09h): Supported LBA-Change 00:13:02.584 Copy (19h): Supported LBA-Change 00:13:02.584 00:13:02.584 Error Log 00:13:02.584 ========= 00:13:02.584 00:13:02.584 Arbitration 00:13:02.584 =========== 00:13:02.584 Arbitration Burst: 1 00:13:02.584 00:13:02.584 Power Management 00:13:02.584 ================ 00:13:02.584 Number of Power States: 1 00:13:02.584 Current Power State: Power State #0 00:13:02.584 Power State #0: 00:13:02.584 Max Power: 0.00 W 00:13:02.584 Non-Operational State: Operational 00:13:02.584 Entry Latency: Not Reported 00:13:02.584 Exit Latency: Not Reported 00:13:02.584 Relative Read Throughput: 0 00:13:02.584 Relative Read Latency: 0 00:13:02.584 Relative Write Throughput: 0 00:13:02.584 Relative Write Latency: 0 00:13:02.584 Idle Power: Not Reported 00:13:02.584 Active Power: Not Reported 00:13:02.584 Non-Operational Permissive Mode: Not Supported 00:13:02.584 00:13:02.584 Health Information 00:13:02.584 ================== 00:13:02.584 Critical Warnings: 00:13:02.584 Available Spare Space: OK 00:13:02.584 Temperature: OK 00:13:02.584 Device Reliability: OK 00:13:02.584 Read Only: No 00:13:02.584 Volatile Memory Backup: OK 00:13:02.584 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:02.584 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:02.584 Available Spare: 0% 00:13:02.584 Available Spare Threshold: 0% 00:13:02.584 Life Percentage Used: 0% 00:13:02.584 Data Units Read: 0 00:13:02.584 Data Units Written: 0 00:13:02.584 Host Read Commands: 0 00:13:02.584 Host Write Commands: 0 00:13:02.584 Controller Busy Time: 0 minutes 00:13:02.584 Power Cycles: 0 00:13:02.584 Power On Hours: 0 hours 00:13:02.584 Unsafe Shutdowns: 0 00:13:02.584 Unrecoverable Media Errors: 0 00:13:02.585 Lifetime Error Log Entries: 0 00:13:02.585 Warning Temperature Time: 0 minutes 00:13:02.585 Critical Temperature Time: 0 minutes 00:13:02.585 00:13:02.585 Number of Queues 00:13:02.585 ================ 00:13:02.585 Number of I/O Submission Queues: 127 00:13:02.585 Number of I/O Completion Queues: 127 00:13:02.585 00:13:02.585 Active Namespaces 00:13:02.585 ================= 00:13:02.585 Namespace ID:1 00:13:02.585 Error Recovery Timeout: Unlimited 00:13:02.585 Command Set Identifier: NVM (00h) 00:13:02.585 Deallocate: Supported 00:13:02.585 Deallocated/Unwritten Error: Not Supported 00:13:02.585 Deallocated Read Value: Unknown 00:13:02.585 Deallocate in Write Zeroes: Not Supported 00:13:02.585 Deallocated Guard Field: 0xFFFF 00:13:02.585 Flush: Supported 00:13:02.585 Reservation: Supported 00:13:02.585 Namespace Sharing Capabilities: Multiple Controllers 00:13:02.585 Size (in LBAs): 131072 (0GiB) 00:13:02.585 Capacity (in LBAs): 131072 (0GiB) 00:13:02.585 Utilization (in LBAs): 131072 (0GiB) 00:13:02.585 NGUID: 51B66193708446D19EA48CEB8B535664 00:13:02.585 UUID: 51b66193-7084-46d1-9ea4-8ceb8b535664 00:13:02.585 Thin Provisioning: Not Supported 00:13:02.585 Per-NS Atomic Units: Yes 00:13:02.585 Atomic Boundary Size (Normal): 0 00:13:02.585 Atomic Boundary Size (PFail): 0 00:13:02.585 Atomic Boundary Offset: 0 00:13:02.585 Maximum Single Source Range Length: 65535 00:13:02.585 Maximum Copy Length: 65535 00:13:02.585 Maximum Source Range Count: 1 00:13:02.585 NGUID/EUI64 Never Reused: No 00:13:02.585 Namespace Write Protected: No 00:13:02.585 Number of LBA Formats: 1 00:13:02.585 Current LBA Format: LBA Format #00 00:13:02.585 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:02.585
00:13:02.584 [2024-07-24 19:42:19.890915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:02.584 [2024-07-24 19:42:19.890932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:02.584 [2024-07-24 19:42:19.890976] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:02.584 [2024-07-24 19:42:19.890994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.584 [2024-07-24 19:42:19.891004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.584 [2024-07-24 19:42:19.891014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.584 [2024-07-24 19:42:19.891024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.584 [2024-07-24 19:42:19.892255] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:02.584 [2024-07-24 19:42:19.892277] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:02.584 [2024-07-24 19:42:19.892454] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.584 [2024-07-24 19:42:19.892525] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:02.584 [2024-07-24 19:42:19.892554] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:02.584 [2024-07-24 19:42:19.893465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:02.584 [2024-07-24 19:42:19.893489] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:02.584 [2024-07-24 19:42:19.893561] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:02.584 [2024-07-24 19:42:19.897252] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:13:02.585 00:13:02.585 19:42:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:02.841 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.841 [2024-07-24 19:42:20.127101] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.104 Initializing NVMe Controllers 00:13:08.104 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:08.104 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:08.104 Initialization complete. Launching workers. 00:13:08.104 ======================================================== 00:13:08.104 Latency(us) 00:13:08.104 Device Information : IOPS MiB/s Average min max 00:13:08.104 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33882.91 132.36 3777.15 1190.16 7555.71 00:13:08.104 ======================================================== 00:13:08.104 Total : 33882.91 132.36 3777.15 1190.16 7555.71 00:13:08.104 00:13:08.104 [2024-07-24 19:42:25.149758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.104 19:42:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:08.104 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.104 [2024-07-24 19:42:25.380880] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:13.368 Initializing NVMe Controllers 00:13:13.368 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:13.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:13.368 Initialization complete. Launching workers. 
00:13:13.368 ======================================================== 00:13:13.368 Latency(us) 00:13:13.368 Device Information : IOPS MiB/s Average min max 00:13:13.368 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16005.00 62.52 8005.90 4959.60 15958.77 00:13:13.368 ======================================================== 00:13:13.368 Total : 16005.00 62.52 8005.90 4959.60 15958.77 00:13:13.368 00:13:13.368 [2024-07-24 19:42:30.416430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:13.368 19:42:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:13.368 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.368 [2024-07-24 19:42:30.617474] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:18.642 [2024-07-24 19:42:35.683560] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:18.642 Initializing NVMe Controllers 00:13:18.642 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.642 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.642 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:18.642 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:18.642 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:18.642 Initialization complete. Launching workers. 00:13:18.642 Starting thread on core 2 00:13:18.642 Starting thread on core 3 00:13:18.642 Starting thread on core 1 00:13:18.642 19:42:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:18.642 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.642 [2024-07-24 19:42:35.983664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:21.932 [2024-07-24 19:42:39.044962] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:21.932 Initializing NVMe Controllers 00:13:21.932 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:21.932 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:21.932 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:21.932 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:21.933 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:21.933 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:21.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:21.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:21.933 Initialization complete. Launching workers. 
00:13:21.933 Starting thread on core 1 with urgent priority queue 00:13:21.933 Starting thread on core 2 with urgent priority queue 00:13:21.933 Starting thread on core 3 with urgent priority queue 00:13:21.933 Starting thread on core 0 with urgent priority queue 00:13:21.933 SPDK bdev Controller (SPDK1 ) core 0: 5724.67 IO/s 17.47 secs/100000 ios 00:13:21.933 SPDK bdev Controller (SPDK1 ) core 1: 5632.33 IO/s 17.75 secs/100000 ios 00:13:21.933 SPDK bdev Controller (SPDK1 ) core 2: 5709.67 IO/s 17.51 secs/100000 ios 00:13:21.933 SPDK bdev Controller (SPDK1 ) core 3: 5157.33 IO/s 19.39 secs/100000 ios 00:13:21.933 ======================================================== 00:13:21.933 00:13:21.933 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:21.933 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.191 [2024-07-24 19:42:39.346340] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:22.191 Initializing NVMe Controllers 00:13:22.191 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.191 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:22.191 Namespace ID: 1 size: 0GB 00:13:22.191 Initialization complete. 00:13:22.191 INFO: using host memory buffer for IO 00:13:22.191 Hello world! 00:13:22.191 [2024-07-24 19:42:39.381943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:22.191 19:42:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:22.191 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.449 [2024-07-24 19:42:39.678800] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.441 Initializing NVMe Controllers 00:13:23.441 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.441 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:23.441 Initialization complete. Launching workers. 
00:13:23.441 submit (in ns) avg, min, max = 6047.3, 3494.4, 4017721.1 00:13:23.441 complete (in ns) avg, min, max = 29158.4, 2058.9, 4019078.9 00:13:23.441 00:13:23.441 Submit histogram 00:13:23.441 ================ 00:13:23.441 Range in us Cumulative Count 00:13:23.441 3.484 - 3.508: 0.0076% ( 1) 00:13:23.441 3.508 - 3.532: 0.1987% ( 25) 00:13:23.441 3.532 - 3.556: 1.2995% ( 144) 00:13:23.441 3.556 - 3.579: 3.5316% ( 292) 00:13:23.441 3.579 - 3.603: 8.6225% ( 666) 00:13:23.441 3.603 - 3.627: 16.7482% ( 1063) 00:13:23.441 3.627 - 3.650: 27.4117% ( 1395) 00:13:23.441 3.650 - 3.674: 36.5235% ( 1192) 00:13:23.441 3.674 - 3.698: 44.6338% ( 1061) 00:13:23.441 3.698 - 3.721: 51.2689% ( 868) 00:13:23.441 3.721 - 3.745: 56.7421% ( 716) 00:13:23.441 3.745 - 3.769: 61.3821% ( 607) 00:13:23.441 3.769 - 3.793: 65.5022% ( 539) 00:13:23.441 3.793 - 3.816: 69.0491% ( 464) 00:13:23.441 3.816 - 3.840: 72.3513% ( 432) 00:13:23.441 3.840 - 3.864: 75.6001% ( 425) 00:13:23.441 3.864 - 3.887: 79.3992% ( 497) 00:13:23.441 3.887 - 3.911: 83.0836% ( 482) 00:13:23.441 3.911 - 3.935: 85.8737% ( 365) 00:13:23.441 3.935 - 3.959: 87.8994% ( 265) 00:13:23.441 3.959 - 3.982: 89.7416% ( 241) 00:13:23.441 3.982 - 4.006: 91.4998% ( 230) 00:13:23.441 4.006 - 4.030: 92.5317% ( 135) 00:13:23.441 4.030 - 4.053: 93.4719% ( 123) 00:13:23.441 4.053 - 4.077: 94.4045% ( 122) 00:13:23.441 4.077 - 4.101: 94.9778% ( 75) 00:13:23.441 4.101 - 4.124: 95.5588% ( 76) 00:13:23.441 4.124 - 4.148: 96.0939% ( 70) 00:13:23.441 4.148 - 4.172: 96.3996% ( 40) 00:13:23.441 4.172 - 4.196: 96.6519% ( 33) 00:13:23.441 4.196 - 4.219: 96.8506% ( 26) 00:13:23.441 4.219 - 4.243: 97.0188% ( 22) 00:13:23.441 4.243 - 4.267: 97.1258% ( 14) 00:13:23.441 4.267 - 4.290: 97.2405% ( 15) 00:13:23.441 4.290 - 4.314: 97.3475% ( 14) 00:13:23.441 4.314 - 4.338: 97.3857% ( 5) 00:13:23.441 4.338 - 4.361: 97.4774% ( 12) 00:13:23.441 4.361 - 4.385: 97.5539% ( 10) 00:13:23.441 4.385 - 4.409: 97.6380% ( 11) 00:13:23.441 4.409 - 4.433: 97.7068% ( 9) 00:13:23.441 4.433 - 4.456: 97.7373% ( 4) 00:13:23.441 4.456 - 4.480: 97.7603% ( 3) 00:13:23.441 4.480 - 4.504: 97.7679% ( 1) 00:13:23.441 4.504 - 4.527: 97.7909% ( 3) 00:13:23.441 4.527 - 4.551: 97.7985% ( 1) 00:13:23.441 4.599 - 4.622: 97.8061% ( 1) 00:13:23.441 4.622 - 4.646: 97.8367% ( 4) 00:13:23.441 4.646 - 4.670: 97.8444% ( 1) 00:13:23.441 4.693 - 4.717: 97.8520% ( 1) 00:13:23.441 4.741 - 4.764: 97.8979% ( 6) 00:13:23.441 4.764 - 4.788: 97.9514% ( 7) 00:13:23.441 4.788 - 4.812: 97.9743% ( 3) 00:13:23.441 4.812 - 4.836: 98.0049% ( 4) 00:13:23.441 4.836 - 4.859: 98.0431% ( 5) 00:13:23.441 4.859 - 4.883: 98.1043% ( 8) 00:13:23.441 4.883 - 4.907: 98.1348% ( 4) 00:13:23.441 4.907 - 4.930: 98.1884% ( 7) 00:13:23.441 4.930 - 4.954: 98.2495% ( 8) 00:13:23.441 4.954 - 4.978: 98.3107% ( 8) 00:13:23.441 4.978 - 5.001: 98.3412% ( 4) 00:13:23.441 5.001 - 5.025: 98.3718% ( 4) 00:13:23.441 5.025 - 5.049: 98.3871% ( 2) 00:13:23.441 5.049 - 5.073: 98.4330% ( 6) 00:13:23.441 5.073 - 5.096: 98.4482% ( 2) 00:13:23.441 5.096 - 5.120: 98.4559% ( 1) 00:13:23.441 5.120 - 5.144: 98.5094% ( 7) 00:13:23.441 5.144 - 5.167: 98.5247% ( 2) 00:13:23.441 5.167 - 5.191: 98.5476% ( 3) 00:13:23.441 5.191 - 5.215: 98.5706% ( 3) 00:13:23.441 5.215 - 5.239: 98.5858% ( 2) 00:13:23.441 5.239 - 5.262: 98.5935% ( 1) 00:13:23.441 5.262 - 5.286: 98.6088% ( 2) 00:13:23.441 5.310 - 5.333: 98.6394% ( 4) 00:13:23.441 5.404 - 5.428: 98.6470% ( 1) 00:13:23.441 5.428 - 5.452: 98.6546% ( 1) 00:13:23.441 5.452 - 5.476: 98.6623% ( 1) 00:13:23.441 5.499 - 5.523: 98.6699% ( 1) 
00:13:23.441 5.523 - 5.547: 98.6776% ( 1) 00:13:23.441 5.547 - 5.570: 98.6852% ( 1) 00:13:23.441 5.665 - 5.689: 98.6929% ( 1) 00:13:23.441 5.902 - 5.926: 98.7005% ( 1) 00:13:23.441 6.732 - 6.779: 98.7158% ( 2) 00:13:23.441 6.827 - 6.874: 98.7387% ( 3) 00:13:23.441 7.064 - 7.111: 98.7464% ( 1) 00:13:23.441 7.111 - 7.159: 98.7617% ( 2) 00:13:23.441 7.301 - 7.348: 98.7693% ( 1) 00:13:23.441 7.490 - 7.538: 98.7769% ( 1) 00:13:23.441 7.633 - 7.680: 98.7846% ( 1) 00:13:23.441 7.680 - 7.727: 98.7999% ( 2) 00:13:23.441 7.727 - 7.775: 98.8075% ( 1) 00:13:23.441 7.822 - 7.870: 98.8152% ( 1) 00:13:23.441 7.870 - 7.917: 98.8305% ( 2) 00:13:23.441 7.964 - 8.012: 98.8457% ( 2) 00:13:23.441 8.059 - 8.107: 98.8687% ( 3) 00:13:23.441 8.154 - 8.201: 98.8916% ( 3) 00:13:23.441 8.296 - 8.344: 98.8993% ( 1) 00:13:23.441 8.344 - 8.391: 98.9069% ( 1) 00:13:23.441 8.439 - 8.486: 98.9145% ( 1) 00:13:23.441 8.486 - 8.533: 98.9222% ( 1) 00:13:23.441 8.581 - 8.628: 98.9375% ( 2) 00:13:23.441 8.628 - 8.676: 98.9451% ( 1) 00:13:23.441 8.770 - 8.818: 98.9528% ( 1) 00:13:23.441 8.865 - 8.913: 98.9604% ( 1) 00:13:23.441 8.913 - 8.960: 98.9680% ( 1) 00:13:23.441 8.960 - 9.007: 98.9833% ( 2) 00:13:23.441 9.055 - 9.102: 98.9986% ( 2) 00:13:23.441 9.150 - 9.197: 99.0063% ( 1) 00:13:23.441 9.244 - 9.292: 99.0139% ( 1) 00:13:23.441 9.339 - 9.387: 99.0216% ( 1) 00:13:23.441 9.671 - 9.719: 99.0368% ( 2) 00:13:23.441 9.813 - 9.861: 99.0445% ( 1) 00:13:23.441 9.861 - 9.908: 99.0521% ( 1) 00:13:23.441 9.956 - 10.003: 99.0598% ( 1) 00:13:23.441 10.193 - 10.240: 99.0674% ( 1) 00:13:23.441 10.240 - 10.287: 99.0751% ( 1) 00:13:23.441 10.335 - 10.382: 99.0827% ( 1) 00:13:23.441 10.572 - 10.619: 99.0980% ( 2) 00:13:23.441 10.856 - 10.904: 99.1056% ( 1) 00:13:23.441 11.141 - 11.188: 99.1133% ( 1) 00:13:23.441 11.378 - 11.425: 99.1209% ( 1) 00:13:23.441 11.425 - 11.473: 99.1362% ( 2) 00:13:23.441 11.615 - 11.662: 99.1439% ( 1) 00:13:23.441 11.947 - 11.994: 99.1515% ( 1) 00:13:23.441 12.990 - 13.084: 99.1591% ( 1) 00:13:23.441 13.938 - 14.033: 99.1668% ( 1) 00:13:23.441 14.412 - 14.507: 99.1744% ( 1) 00:13:23.441 14.791 - 14.886: 99.1821% ( 1) 00:13:23.441 17.161 - 17.256: 99.1897% ( 1) 00:13:23.441 17.256 - 17.351: 99.1974% ( 1) 00:13:23.441 17.351 - 17.446: 99.2279% ( 4) 00:13:23.441 17.446 - 17.541: 99.2509% ( 3) 00:13:23.441 17.541 - 17.636: 99.2585% ( 1) 00:13:23.441 17.636 - 17.730: 99.3273% ( 9) 00:13:23.441 17.730 - 17.825: 99.3579% ( 4) 00:13:23.441 17.825 - 17.920: 99.4114% ( 7) 00:13:23.441 17.920 - 18.015: 99.4343% ( 3) 00:13:23.441 18.015 - 18.110: 99.4802% ( 6) 00:13:23.441 18.110 - 18.204: 99.5566% ( 10) 00:13:23.441 18.204 - 18.299: 99.6484% ( 12) 00:13:23.441 18.299 - 18.394: 99.7095% ( 8) 00:13:23.441 18.394 - 18.489: 99.7554% ( 6) 00:13:23.441 18.489 - 18.584: 99.7630% ( 1) 00:13:23.441 18.584 - 18.679: 99.7936% ( 4) 00:13:23.441 18.679 - 18.773: 99.8165% ( 3) 00:13:23.441 18.773 - 18.868: 99.8471% ( 4) 00:13:23.441 18.868 - 18.963: 99.8548% ( 1) 00:13:23.441 18.963 - 19.058: 99.8701% ( 2) 00:13:23.441 19.058 - 19.153: 99.8777% ( 1) 00:13:23.441 19.153 - 19.247: 99.8853% ( 1) 00:13:23.441 19.247 - 19.342: 99.8930% ( 1) 00:13:23.441 19.437 - 19.532: 99.9083% ( 2) 00:13:23.441 20.006 - 20.101: 99.9159% ( 1) 00:13:23.441 21.807 - 21.902: 99.9236% ( 1) 00:13:23.441 22.471 - 22.566: 99.9312% ( 1) 00:13:23.441 24.462 - 24.652: 99.9388% ( 1) 00:13:23.441 25.410 - 25.600: 99.9465% ( 1) 00:13:23.441 3980.705 - 4004.978: 99.9694% ( 3) 00:13:23.441 4004.978 - 4029.250: 100.0000% ( 4) 00:13:23.441 00:13:23.441 Complete histogram 
00:13:23.441 ================== 00:13:23.441 Range in us Cumulative Count 00:13:23.441 2.050 - 2.062: 0.0076% ( 1) 00:13:23.441 2.062 - 2.074: 4.1202% ( 538) 00:13:23.441 2.074 - 2.086: 34.5895% ( 3986) 00:13:23.441 2.086 - 2.098: 41.8361% ( 948) 00:13:23.441 2.098 - 2.110: 47.2099% ( 703) 00:13:23.441 2.110 - 2.121: 57.5524% ( 1353) 00:13:23.441 2.121 - 2.133: 59.7309% ( 285) 00:13:23.441 2.133 - 2.145: 65.9762% ( 817) 00:13:23.441 2.145 - 2.157: 76.3721% ( 1360) 00:13:23.442 2.157 - 2.169: 78.1303% ( 230) 00:13:23.442 2.169 - 2.181: 81.7077% ( 468) 00:13:23.442 2.181 - 2.193: 85.9884% ( 560) 00:13:23.442 2.193 - 2.204: 87.0738% ( 142) 00:13:23.442 2.204 - 2.216: 88.4727% ( 183) 00:13:23.442 2.216 - 2.228: 91.3545% ( 377) 00:13:23.442 2.228 - 2.240: 92.9904% ( 214) 00:13:23.442 2.240 - 2.252: 93.9382% ( 124) 00:13:23.442 2.252 - 2.264: 94.7256% ( 103) 00:13:23.442 2.264 - 2.276: 94.9549% ( 30) 00:13:23.442 2.276 - 2.287: 95.1231% ( 22) 00:13:23.442 2.287 - 2.299: 95.4288% ( 40) 00:13:23.442 2.299 - 2.311: 95.8569% ( 56) 00:13:23.442 2.311 - 2.323: 96.0098% ( 20) 00:13:23.442 2.323 - 2.335: 96.0480% ( 5) 00:13:23.442 2.335 - 2.347: 96.0862% ( 5) 00:13:23.442 2.347 - 2.359: 96.2162% ( 17) 00:13:23.442 2.359 - 2.370: 96.5678% ( 46) 00:13:23.442 2.370 - 2.382: 96.8965% ( 43) 00:13:23.442 2.382 - 2.394: 97.1640% ( 35) 00:13:23.442 2.394 - 2.406: 97.4392% ( 36) 00:13:23.442 2.406 - 2.418: 97.6380% ( 26) 00:13:23.442 2.418 - 2.430: 97.8138% ( 23) 00:13:23.442 2.430 - 2.441: 97.9590% ( 19) 00:13:23.442 2.441 - 2.453: 98.0584% ( 13) 00:13:23.442 2.453 - 2.465: 98.1654% ( 14) 00:13:23.442 2.465 - 2.477: 98.2419% ( 10) 00:13:23.442 2.477 - 2.489: 98.3030% ( 8) 00:13:23.442 2.489 - 2.501: 98.3107% ( 1) 00:13:23.442 2.501 - 2.513: 98.3259% ( 2) 00:13:23.442 2.513 - 2.524: 98.3489% ( 3) 00:13:23.442 2.524 - 2.536: 98.3718% ( 3) 00:13:23.442 2.536 - 2.548: 98.3871% ( 2) 00:13:23.442 2.548 - 2.560: 98.3947% ( 1) 00:13:23.442 2.584 - 2.596: 98.4024% ( 1) 00:13:23.442 2.596 - 2.607: 98.4177% ( 2) 00:13:23.442 2.607 - 2.619: 98.4330% ( 2) 00:13:23.442 2.631 - 2.643: 98.4406% ( 1) 00:13:23.442 2.667 - 2.679: 98.4482% ( 1) 00:13:23.442 2.679 - 2.690: 98.4559% ( 1) 00:13:23.442 2.833 - 2.844: 98.4635% ( 1) 00:13:23.442 3.319 - 3.342: 98.4712% ( 1) 00:13:23.442 3.390 - 3.413: 98.4788% ( 1) 00:13:23.442 3.484 - 3.508: 98.4941% ( 2) 00:13:23.442 3.556 - 3.579: 98.5018% ( 1) 00:13:23.442 3.579 - 3.603: 98.5094% ( 1) 00:13:23.442 3.603 - 3.627: 98.5170% ( 1) 00:13:23.442 3.650 - 3.674: 98.5247% ( 1) 00:13:23.442 3.745 - 3.769: 98.5400% ( 2) 00:13:23.442 3.769 - 3.793: 98.5476% ( 1) 00:13:23.442 3.864 - 3.887: 98.5629% ( 2) 00:13:23.442 3.911 - 3.935: 98.5706% ( 1) 00:13:23.442 3.959 - 3.982: 98.5782% ( 1) 00:13:23.442 4.006 - 4.030: 98.5858% ( 1) 00:13:23.442 4.172 - 4.196: 98.5935% ( 1) 00:13:23.442 5.215 - 5.239: 98.6011% ( 1) 00:13:23.442 5.547 - 5.570: 98.6164% ( 2) 00:13:23.442 5.618 - 5.641: 98.6241% ( 1) 00:13:23.442 6.116 - 6.163: 98.6317% ( 1) 00:13:23.442 6.447 - 6.495: 98.6470% ( 2) 00:13:23.442 6.495 - 6.542: 98.6546% ( 1) 00:13:23.442 6.542 - 6.590: 98.6623% ( 1) 00:13:23.442 6.684 - 6.732: 98.6776% ( 2) 00:13:23.442 6.779 - 6.827: 98.6852% ( 1) 00:13:23.442 7.064 - 7.111: 98.6929% ( 1) 00:13:23.442 7.111 - 7.159: 98.7005% ( 1) 00:13:23.442 7.159 - 7.206: 98.7081% ( 1) 00:13:23.442 7.253 - 7.301: 98.7158% ( 1) 00:13:23.442 7.301 - 7.348: 98.7234% ( 1) 00:13:23.442 7.348 - 7.396: 98.7311% ( 1) 00:13:23.442 8.201 - 8.249: 98.7387% ( 1) 00:13:23.442 8.723 - 8.770: 98.7464% ( 1) 00:13:23.442 9.576 - 9.624: 
98.7540% ( 1) 00:13:23.442 15.455 - 15.550: 98.7617% ( 1) 00:13:23.442 15.550 - 15.644: 98.7693% ( 1) 00:13:23.442 15.644 - 15.739: 98.7769% ( 1) 00:13:23.442 15.739 - 15.834: 98.8305% ( 7) 00:13:23.442 15.834 - 15.929: 98.8534% ( 3) 00:13:23.442 15.929 - 16.024: 98.9069% ( 7) 00:13:23.442 16.024 - 16.119: 98.9375% ( 4) 00:13:23.442 16.119 - 16.213: 98.9528% ( 2) 00:13:23.442 16.308 - 16.403: 98.9757% ( 3) 00:13:23.442 16.403 - 16.498: 99.0139% ( 5) 00:13:23.442 16.498 - 16.593: 99.0598% ( 6) 00:13:23.442 16.593 - 16.687: 99.1133% ( 7) 00:13:23.442 [2024-07-24 19:42:40.700099] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:23.442 16.687 - 16.782: 99.1362% ( 3) 00:13:23.442 16.782 - 16.877: 99.1439% ( 1) 00:13:23.442 16.972 - 17.067: 99.1668% ( 3) 00:13:23.442 17.067 - 17.161: 99.1744% ( 1) 00:13:23.442 17.161 - 17.256: 99.1897% ( 2) 00:13:23.442 17.256 - 17.351: 99.1974% ( 1) 00:13:23.442 17.351 - 17.446: 99.2050% ( 1) 00:13:23.442 17.541 - 17.636: 99.2203% ( 2) 00:13:23.442 17.636 - 17.730: 99.2279% ( 1) 00:13:23.442 17.825 - 17.920: 99.2432% ( 2) 00:13:23.442 17.920 - 18.015: 99.2585% ( 2) 00:13:23.442 18.015 - 18.110: 99.2815% ( 3) 00:13:23.442 18.489 - 18.584: 99.2891% ( 1) 00:13:23.442 18.584 - 18.679: 99.2967% ( 1) 00:13:23.442 19.816 - 19.911: 99.3044% ( 1) 00:13:23.442 20.575 - 20.670: 99.3120% ( 1) 00:13:23.442 25.031 - 25.221: 99.3197% ( 1) 00:13:23.442 26.927 - 27.117: 99.3273% ( 1) 00:13:23.442 3980.705 - 4004.978: 99.7172% ( 51) 00:13:23.442 4004.978 - 4029.250: 100.0000% ( 37) 00:13:23.442 00:13:23.442 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:23.442 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:23.442 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:23.442 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:23.442 19:42:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:23.699 [ 00:13:23.699 { 00:13:23.699 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:23.699 "subtype": "Discovery", 00:13:23.699 "listen_addresses": [], 00:13:23.699 "allow_any_host": true, 00:13:23.699 "hosts": [] 00:13:23.699 }, 00:13:23.699 { 00:13:23.699 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:23.699 "subtype": "NVMe", 00:13:23.699 "listen_addresses": [ 00:13:23.699 { 00:13:23.699 "trtype": "VFIOUSER", 00:13:23.699 "adrfam": "IPv4", 00:13:23.699 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:23.699 "trsvcid": "0" 00:13:23.699 } 00:13:23.699 ], 00:13:23.699 "allow_any_host": true, 00:13:23.699 "hosts": [], 00:13:23.699 "serial_number": "SPDK1", 00:13:23.699 "model_number": "SPDK bdev Controller", 00:13:23.699 "max_namespaces": 32, 00:13:23.699 "min_cntlid": 1, 00:13:23.699 "max_cntlid": 65519, 00:13:23.699 "namespaces": [ 00:13:23.699 { 00:13:23.699 "nsid": 1, 00:13:23.699 "bdev_name": "Malloc1", 00:13:23.699 "name": "Malloc1", 00:13:23.699 "nguid": "51B66193708446D19EA48CEB8B535664", 00:13:23.699 "uuid": "51b66193-7084-46d1-9ea4-8ceb8b535664" 00:13:23.699 } 00:13:23.699 ] 00:13:23.699 }, 00:13:23.699 { 00:13:23.699 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:13:23.699 "subtype": "NVMe", 00:13:23.699 "listen_addresses": [ 00:13:23.699 { 00:13:23.699 "trtype": "VFIOUSER", 00:13:23.699 "adrfam": "IPv4", 00:13:23.699 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:23.699 "trsvcid": "0" 00:13:23.699 } 00:13:23.699 ], 00:13:23.699 "allow_any_host": true, 00:13:23.699 "hosts": [], 00:13:23.699 "serial_number": "SPDK2", 00:13:23.699 "model_number": "SPDK bdev Controller", 00:13:23.699 "max_namespaces": 32, 00:13:23.699 "min_cntlid": 1, 00:13:23.699 "max_cntlid": 65519, 00:13:23.699 "namespaces": [ 00:13:23.699 { 00:13:23.699 "nsid": 1, 00:13:23.699 "bdev_name": "Malloc2", 00:13:23.699 "name": "Malloc2", 00:13:23.699 "nguid": "8072FF81EC0F4293924463E34175BC39", 00:13:23.699 "uuid": "8072ff81-ec0f-4293-9244-63e34175bc39" 00:13:23.699 } 00:13:23.699 ] 00:13:23.699 } 00:13:23.699 ] 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1155520 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # local i=0 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1277 -- # return 0 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:23.699 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:23.699 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.956 [2024-07-24 19:42:41.168754] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.956 Malloc3 00:13:23.956 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:24.214 [2024-07-24 19:42:41.514387] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.214 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:24.214 Asynchronous Event Request test 00:13:24.214 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.214 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:24.214 Registering asynchronous event callbacks... 00:13:24.214 Starting namespace attribute notice tests for all controllers... 
00:13:24.214 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:24.214 aer_cb - Changed Namespace 00:13:24.214 Cleaning up... 00:13:24.472 [ 00:13:24.472 { 00:13:24.472 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:24.472 "subtype": "Discovery", 00:13:24.472 "listen_addresses": [], 00:13:24.472 "allow_any_host": true, 00:13:24.472 "hosts": [] 00:13:24.472 }, 00:13:24.472 { 00:13:24.472 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:24.472 "subtype": "NVMe", 00:13:24.472 "listen_addresses": [ 00:13:24.472 { 00:13:24.472 "trtype": "VFIOUSER", 00:13:24.472 "adrfam": "IPv4", 00:13:24.472 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:24.472 "trsvcid": "0" 00:13:24.472 } 00:13:24.472 ], 00:13:24.472 "allow_any_host": true, 00:13:24.472 "hosts": [], 00:13:24.472 "serial_number": "SPDK1", 00:13:24.472 "model_number": "SPDK bdev Controller", 00:13:24.472 "max_namespaces": 32, 00:13:24.472 "min_cntlid": 1, 00:13:24.472 "max_cntlid": 65519, 00:13:24.472 "namespaces": [ 00:13:24.472 { 00:13:24.472 "nsid": 1, 00:13:24.472 "bdev_name": "Malloc1", 00:13:24.472 "name": "Malloc1", 00:13:24.472 "nguid": "51B66193708446D19EA48CEB8B535664", 00:13:24.472 "uuid": "51b66193-7084-46d1-9ea4-8ceb8b535664" 00:13:24.472 }, 00:13:24.472 { 00:13:24.472 "nsid": 2, 00:13:24.472 "bdev_name": "Malloc3", 00:13:24.472 "name": "Malloc3", 00:13:24.472 "nguid": "E2CDD9BED2D14CF282B85E958CF02297", 00:13:24.472 "uuid": "e2cdd9be-d2d1-4cf2-82b8-5e958cf02297" 00:13:24.472 } 00:13:24.472 ] 00:13:24.472 }, 00:13:24.472 { 00:13:24.472 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:24.472 "subtype": "NVMe", 00:13:24.472 "listen_addresses": [ 00:13:24.472 { 00:13:24.472 "trtype": "VFIOUSER", 00:13:24.472 "adrfam": "IPv4", 00:13:24.472 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:24.472 "trsvcid": "0" 00:13:24.472 } 00:13:24.472 ], 00:13:24.472 "allow_any_host": true, 00:13:24.472 "hosts": [], 00:13:24.472 "serial_number": "SPDK2", 00:13:24.472 "model_number": "SPDK bdev Controller", 00:13:24.472 "max_namespaces": 32, 00:13:24.472 "min_cntlid": 1, 00:13:24.472 "max_cntlid": 65519, 00:13:24.472 "namespaces": [ 00:13:24.472 { 00:13:24.472 "nsid": 1, 00:13:24.472 "bdev_name": "Malloc2", 00:13:24.472 "name": "Malloc2", 00:13:24.472 "nguid": "8072FF81EC0F4293924463E34175BC39", 00:13:24.472 "uuid": "8072ff81-ec0f-4293-9244-63e34175bc39" 00:13:24.472 } 00:13:24.472 ] 00:13:24.472 } 00:13:24.472 ] 00:13:24.472 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1155520 00:13:24.473 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:24.473 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:24.473 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:24.473 19:42:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:24.473 [2024-07-24 19:42:41.789194] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:13:24.473 [2024-07-24 19:42:41.789259] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1155531 ] 00:13:24.473 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.473 [2024-07-24 19:42:41.822391] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:24.473 [2024-07-24 19:42:41.832568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:24.473 [2024-07-24 19:42:41.832598] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f61e514f000 00:13:24.473 [2024-07-24 19:42:41.833559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.834560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.835569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.836563] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.837560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.838574] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.839592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.840605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:24.473 [2024-07-24 19:42:41.841611] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:24.473 [2024-07-24 19:42:41.841632] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f61e5144000 00:13:24.473 [2024-07-24 19:42:41.842745] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:24.733 [2024-07-24 19:42:41.858123] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:24.733 [2024-07-24 19:42:41.858158] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:24.733 [2024-07-24 19:42:41.863273] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:24.733 [2024-07-24 19:42:41.863338] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:24.733 [2024-07-24 19:42:41.863431] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:13:24.733 [2024-07-24 19:42:41.863454] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:24.733 [2024-07-24 19:42:41.863469] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:24.733 [2024-07-24 19:42:41.864286] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:24.733 [2024-07-24 19:42:41.864314] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:24.733 [2024-07-24 19:42:41.864329] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:24.733 [2024-07-24 19:42:41.865292] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:24.733 [2024-07-24 19:42:41.865313] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:24.733 [2024-07-24 19:42:41.865327] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:24.733 [2024-07-24 19:42:41.866303] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:24.733 [2024-07-24 19:42:41.866324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:24.733 [2024-07-24 19:42:41.867318] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:24.733 [2024-07-24 19:42:41.867340] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:24.733 [2024-07-24 19:42:41.867349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:24.733 [2024-07-24 19:42:41.867362] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:24.733 [2024-07-24 19:42:41.867471] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:24.733 [2024-07-24 19:42:41.867479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:24.733 [2024-07-24 19:42:41.867488] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:24.733 [2024-07-24 19:42:41.868325] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:24.733 [2024-07-24 19:42:41.869332] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:24.733 [2024-07-24 19:42:41.870342] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:24.733 [2024-07-24 19:42:41.871337] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.733 [2024-07-24 19:42:41.871406] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:24.733 [2024-07-24 19:42:41.872355] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:24.733 [2024-07-24 19:42:41.872376] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:24.733 [2024-07-24 19:42:41.872385] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.872414] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:24.733 [2024-07-24 19:42:41.872428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.872448] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.733 [2024-07-24 19:42:41.872458] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.733 [2024-07-24 19:42:41.872464] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.733 [2024-07-24 19:42:41.872481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.733 [2024-07-24 19:42:41.879255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:24.733 [2024-07-24 19:42:41.879288] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:24.733 [2024-07-24 19:42:41.879297] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:24.733 [2024-07-24 19:42:41.879306] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:24.733 [2024-07-24 19:42:41.879313] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:24.733 [2024-07-24 19:42:41.879321] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:24.733 [2024-07-24 19:42:41.879331] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:24.733 [2024-07-24 19:42:41.879339] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.879352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.879372] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:24.733 [2024-07-24 19:42:41.887256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:24.733 [2024-07-24 19:42:41.887300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.733 [2024-07-24 19:42:41.887315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.733 [2024-07-24 19:42:41.887327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.733 [2024-07-24 19:42:41.887340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:24.733 [2024-07-24 19:42:41.887349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.887365] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.887380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:24.733 [2024-07-24 19:42:41.895255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:24.733 [2024-07-24 19:42:41.895274] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:24.733 [2024-07-24 19:42:41.895287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.895303] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.895314] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:24.733 [2024-07-24 19:42:41.895327] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.903251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.903325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.903343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.903356] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:24.734 [2024-07-24 19:42:41.903365] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:24.734 [2024-07-24 
19:42:41.903371] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.734 [2024-07-24 19:42:41.903381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.911268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.911291] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:24.734 [2024-07-24 19:42:41.911306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.911321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.911333] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.734 [2024-07-24 19:42:41.911342] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.734 [2024-07-24 19:42:41.911348] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.734 [2024-07-24 19:42:41.911357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.919254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.919283] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.919300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.919314] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:24.734 [2024-07-24 19:42:41.919322] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.734 [2024-07-24 19:42:41.919328] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.734 [2024-07-24 19:42:41.919338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.927266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.927299] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.927313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.927330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:24.734 [2024-07-24 
19:42:41.927343] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.927352] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.927361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.927369] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:24.734 [2024-07-24 19:42:41.927377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:24.734 [2024-07-24 19:42:41.927385] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:24.734 [2024-07-24 19:42:41.927410] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.935254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.935282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.941259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.941290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.950255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.950292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.958271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.958303] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:24.734 [2024-07-24 19:42:41.958314] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:24.734 [2024-07-24 19:42:41.958321] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:24.734 [2024-07-24 19:42:41.958327] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:24.734 [2024-07-24 19:42:41.958332] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:24.734 [2024-07-24 19:42:41.958342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:24.734 [2024-07-24 19:42:41.958354] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:24.734 [2024-07-24 19:42:41.958362] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:13:24.734 [2024-07-24 19:42:41.958368] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.734 [2024-07-24 19:42:41.958380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.958392] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:24.734 [2024-07-24 19:42:41.958400] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:24.734 [2024-07-24 19:42:41.958405] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.734 [2024-07-24 19:42:41.958414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.958426] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:24.734 [2024-07-24 19:42:41.958434] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:24.734 [2024-07-24 19:42:41.958439] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:24.734 [2024-07-24 19:42:41.958448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:24.734 [2024-07-24 19:42:41.966259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.966292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.966309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:24.734 [2024-07-24 19:42:41.966321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:24.734 ===================================================== 00:13:24.734 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:24.734 ===================================================== 00:13:24.734 Controller Capabilities/Features 00:13:24.734 ================================ 00:13:24.734 Vendor ID: 4e58 00:13:24.734 Subsystem Vendor ID: 4e58 00:13:24.734 Serial Number: SPDK2 00:13:24.734 Model Number: SPDK bdev Controller 00:13:24.734 Firmware Version: 24.09 00:13:24.734 Recommended Arb Burst: 6 00:13:24.734 IEEE OUI Identifier: 8d 6b 50 00:13:24.734 Multi-path I/O 00:13:24.734 May have multiple subsystem ports: Yes 00:13:24.734 May have multiple controllers: Yes 00:13:24.734 Associated with SR-IOV VF: No 00:13:24.734 Max Data Transfer Size: 131072 00:13:24.734 Max Number of Namespaces: 32 00:13:24.734 Max Number of I/O Queues: 127 00:13:24.734 NVMe Specification Version (VS): 1.3 00:13:24.734 NVMe Specification Version (Identify): 1.3 00:13:24.734 Maximum Queue Entries: 256 00:13:24.734 Contiguous Queues Required: Yes 00:13:24.734 Arbitration Mechanisms Supported 00:13:24.734 Weighted Round Robin: Not Supported 00:13:24.734 Vendor Specific: Not Supported 00:13:24.734 Reset Timeout: 15000 ms 00:13:24.734 Doorbell Stride: 4 
bytes 00:13:24.734 NVM Subsystem Reset: Not Supported 00:13:24.734 Command Sets Supported 00:13:24.734 NVM Command Set: Supported 00:13:24.734 Boot Partition: Not Supported 00:13:24.734 Memory Page Size Minimum: 4096 bytes 00:13:24.734 Memory Page Size Maximum: 4096 bytes 00:13:24.734 Persistent Memory Region: Not Supported 00:13:24.734 Optional Asynchronous Events Supported 00:13:24.734 Namespace Attribute Notices: Supported 00:13:24.734 Firmware Activation Notices: Not Supported 00:13:24.734 ANA Change Notices: Not Supported 00:13:24.734 PLE Aggregate Log Change Notices: Not Supported 00:13:24.734 LBA Status Info Alert Notices: Not Supported 00:13:24.734 EGE Aggregate Log Change Notices: Not Supported 00:13:24.734 Normal NVM Subsystem Shutdown event: Not Supported 00:13:24.734 Zone Descriptor Change Notices: Not Supported 00:13:24.734 Discovery Log Change Notices: Not Supported 00:13:24.734 Controller Attributes 00:13:24.734 128-bit Host Identifier: Supported 00:13:24.734 Non-Operational Permissive Mode: Not Supported 00:13:24.735 NVM Sets: Not Supported 00:13:24.735 Read Recovery Levels: Not Supported 00:13:24.735 Endurance Groups: Not Supported 00:13:24.735 Predictable Latency Mode: Not Supported 00:13:24.735 Traffic Based Keep ALive: Not Supported 00:13:24.735 Namespace Granularity: Not Supported 00:13:24.735 SQ Associations: Not Supported 00:13:24.735 UUID List: Not Supported 00:13:24.735 Multi-Domain Subsystem: Not Supported 00:13:24.735 Fixed Capacity Management: Not Supported 00:13:24.735 Variable Capacity Management: Not Supported 00:13:24.735 Delete Endurance Group: Not Supported 00:13:24.735 Delete NVM Set: Not Supported 00:13:24.735 Extended LBA Formats Supported: Not Supported 00:13:24.735 Flexible Data Placement Supported: Not Supported 00:13:24.735 00:13:24.735 Controller Memory Buffer Support 00:13:24.735 ================================ 00:13:24.735 Supported: No 00:13:24.735 00:13:24.735 Persistent Memory Region Support 00:13:24.735 ================================ 00:13:24.735 Supported: No 00:13:24.735 00:13:24.735 Admin Command Set Attributes 00:13:24.735 ============================ 00:13:24.735 Security Send/Receive: Not Supported 00:13:24.735 Format NVM: Not Supported 00:13:24.735 Firmware Activate/Download: Not Supported 00:13:24.735 Namespace Management: Not Supported 00:13:24.735 Device Self-Test: Not Supported 00:13:24.735 Directives: Not Supported 00:13:24.735 NVMe-MI: Not Supported 00:13:24.735 Virtualization Management: Not Supported 00:13:24.735 Doorbell Buffer Config: Not Supported 00:13:24.735 Get LBA Status Capability: Not Supported 00:13:24.735 Command & Feature Lockdown Capability: Not Supported 00:13:24.735 Abort Command Limit: 4 00:13:24.735 Async Event Request Limit: 4 00:13:24.735 Number of Firmware Slots: N/A 00:13:24.735 Firmware Slot 1 Read-Only: N/A 00:13:24.735 Firmware Activation Without Reset: N/A 00:13:24.735 Multiple Update Detection Support: N/A 00:13:24.735 Firmware Update Granularity: No Information Provided 00:13:24.735 Per-Namespace SMART Log: No 00:13:24.735 Asymmetric Namespace Access Log Page: Not Supported 00:13:24.735 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:24.735 Command Effects Log Page: Supported 00:13:24.735 Get Log Page Extended Data: Supported 00:13:24.735 Telemetry Log Pages: Not Supported 00:13:24.735 Persistent Event Log Pages: Not Supported 00:13:24.735 Supported Log Pages Log Page: May Support 00:13:24.735 Commands Supported & Effects Log Page: Not Supported 00:13:24.735 Feature Identifiers & Effects Log 
Page:May Support 00:13:24.735 NVMe-MI Commands & Effects Log Page: May Support 00:13:24.735 Data Area 4 for Telemetry Log: Not Supported 00:13:24.735 Error Log Page Entries Supported: 128 00:13:24.735 Keep Alive: Supported 00:13:24.735 Keep Alive Granularity: 10000 ms 00:13:24.735 00:13:24.735 NVM Command Set Attributes 00:13:24.735 ========================== 00:13:24.735 Submission Queue Entry Size 00:13:24.735 Max: 64 00:13:24.735 Min: 64 00:13:24.735 Completion Queue Entry Size 00:13:24.735 Max: 16 00:13:24.735 Min: 16 00:13:24.735 Number of Namespaces: 32 00:13:24.735 Compare Command: Supported 00:13:24.735 Write Uncorrectable Command: Not Supported 00:13:24.735 Dataset Management Command: Supported 00:13:24.735 Write Zeroes Command: Supported 00:13:24.735 Set Features Save Field: Not Supported 00:13:24.735 Reservations: Not Supported 00:13:24.735 Timestamp: Not Supported 00:13:24.735 Copy: Supported 00:13:24.735 Volatile Write Cache: Present 00:13:24.735 Atomic Write Unit (Normal): 1 00:13:24.735 Atomic Write Unit (PFail): 1 00:13:24.735 Atomic Compare & Write Unit: 1 00:13:24.735 Fused Compare & Write: Supported 00:13:24.735 Scatter-Gather List 00:13:24.735 SGL Command Set: Supported (Dword aligned) 00:13:24.735 SGL Keyed: Not Supported 00:13:24.735 SGL Bit Bucket Descriptor: Not Supported 00:13:24.735 SGL Metadata Pointer: Not Supported 00:13:24.735 Oversized SGL: Not Supported 00:13:24.735 SGL Metadata Address: Not Supported 00:13:24.735 SGL Offset: Not Supported 00:13:24.735 Transport SGL Data Block: Not Supported 00:13:24.735 Replay Protected Memory Block: Not Supported 00:13:24.735 00:13:24.735 Firmware Slot Information 00:13:24.735 ========================= 00:13:24.735 Active slot: 1 00:13:24.735 Slot 1 Firmware Revision: 24.09 00:13:24.735 00:13:24.735 00:13:24.735 Commands Supported and Effects 00:13:24.735 ============================== 00:13:24.735 Admin Commands 00:13:24.735 -------------- 00:13:24.735 Get Log Page (02h): Supported 00:13:24.735 Identify (06h): Supported 00:13:24.735 Abort (08h): Supported 00:13:24.735 Set Features (09h): Supported 00:13:24.735 Get Features (0Ah): Supported 00:13:24.735 Asynchronous Event Request (0Ch): Supported 00:13:24.735 Keep Alive (18h): Supported 00:13:24.735 I/O Commands 00:13:24.735 ------------ 00:13:24.735 Flush (00h): Supported LBA-Change 00:13:24.735 Write (01h): Supported LBA-Change 00:13:24.735 Read (02h): Supported 00:13:24.735 Compare (05h): Supported 00:13:24.735 Write Zeroes (08h): Supported LBA-Change 00:13:24.735 Dataset Management (09h): Supported LBA-Change 00:13:24.735 Copy (19h): Supported LBA-Change 00:13:24.735 00:13:24.735 Error Log 00:13:24.735 ========= 00:13:24.735 00:13:24.735 Arbitration 00:13:24.735 =========== 00:13:24.735 Arbitration Burst: 1 00:13:24.735 00:13:24.735 Power Management 00:13:24.735 ================ 00:13:24.735 Number of Power States: 1 00:13:24.735 Current Power State: Power State #0 00:13:24.735 Power State #0: 00:13:24.735 Max Power: 0.00 W 00:13:24.735 Non-Operational State: Operational 00:13:24.735 Entry Latency: Not Reported 00:13:24.735 Exit Latency: Not Reported 00:13:24.735 Relative Read Throughput: 0 00:13:24.735 Relative Read Latency: 0 00:13:24.735 Relative Write Throughput: 0 00:13:24.735 Relative Write Latency: 0 00:13:24.735 Idle Power: Not Reported 00:13:24.735 Active Power: Not Reported 00:13:24.735 Non-Operational Permissive Mode: Not Supported 00:13:24.735 00:13:24.735 Health Information 00:13:24.735 ================== 00:13:24.735 Critical Warnings: 00:13:24.735 
Available Spare Space: OK 00:13:24.735 Temperature: OK 00:13:24.735 Device Reliability: OK 00:13:24.735 Read Only: No 00:13:24.735 Volatile Memory Backup: OK 00:13:24.735 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:24.735 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:24.735 Available Spare: 0% 00:13:24.735 Available Spare Threshold: 0% 00:13:24.735 Life Percentage Used: 0% 00:13:24.735 Data Units Read: 0 00:13:24.735 Data Units Written: 0 00:13:24.735 Host Read Commands: 0 00:13:24.735 Host Write Commands: 0 00:13:24.735 Controller Busy Time: 0 minutes 00:13:24.735 Power Cycles: 0 00:13:24.735 Power On Hours: 0 hours 00:13:24.735 Unsafe Shutdowns: 0 00:13:24.735 Unrecoverable Media Errors: 0 00:13:24.736 Lifetime Error Log Entries: 0 00:13:24.736 Warning Temperature Time: 0 minutes 00:13:24.736 Critical Temperature Time: 0 minutes 00:13:24.736 [2024-07-24 19:42:41.966446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:24.735 [2024-07-24 19:42:41.974272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:24.735 [2024-07-24 19:42:41.974330] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:24.735 [2024-07-24 19:42:41.974347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.735 [2024-07-24 19:42:41.974358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.735 [2024-07-24 19:42:41.974368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.735 [2024-07-24 19:42:41.974377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.735 [2024-07-24 19:42:41.974460] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:24.735 [2024-07-24 19:42:41.974481] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:24.735 [2024-07-24 19:42:41.975459] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.735 [2024-07-24 19:42:41.975544] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:24.735 [2024-07-24 19:42:41.975574] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:24.735 [2024-07-24 19:42:41.976467] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:24.735 [2024-07-24 19:42:41.976492] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:24.735 [2024-07-24 19:42:41.976562] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:24.735 [2024-07-24 19:42:41.979270] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:24.735 
00:13:24.736 Number of Queues 00:13:24.736 ================ 00:13:24.736 Number of I/O Submission Queues: 127 00:13:24.736 Number of I/O Completion Queues: 127 00:13:24.736 00:13:24.736 Active Namespaces 00:13:24.736 ================= 00:13:24.736 Namespace ID:1 00:13:24.736 Error Recovery Timeout: Unlimited 00:13:24.736 Command Set Identifier: NVM (00h) 00:13:24.736 Deallocate: Supported 00:13:24.736 Deallocated/Unwritten Error: Not Supported 00:13:24.736 Deallocated Read Value: Unknown 00:13:24.736 Deallocate in Write Zeroes: Not Supported 00:13:24.736 Deallocated Guard Field: 0xFFFF 00:13:24.736 Flush: Supported 00:13:24.736 Reservation: Supported 00:13:24.736 Namespace Sharing Capabilities: Multiple Controllers 00:13:24.736 Size (in LBAs): 131072 (0GiB) 00:13:24.736 Capacity (in LBAs): 131072 (0GiB) 00:13:24.736 Utilization (in LBAs): 131072 (0GiB) 00:13:24.736 NGUID: 8072FF81EC0F4293924463E34175BC39 00:13:24.736 UUID: 8072ff81-ec0f-4293-9244-63e34175bc39 00:13:24.736 Thin Provisioning: Not Supported 00:13:24.736 Per-NS Atomic Units: Yes 00:13:24.736 Atomic Boundary Size (Normal): 0 00:13:24.736 Atomic Boundary Size (PFail): 0 00:13:24.736 Atomic Boundary Offset: 0 00:13:24.736 Maximum Single Source Range Length: 65535 00:13:24.736 Maximum Copy Length: 65535 00:13:24.736 Maximum Source Range Count: 1 00:13:24.736 NGUID/EUI64 Never Reused: No 00:13:24.736 Namespace Write Protected: No 00:13:24.736 Number of LBA Formats: 1 00:13:24.736 Current LBA Format: LBA Format #00 00:13:24.736 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:24.736 00:13:24.736 19:42:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:24.736 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.994 [2024-07-24 19:42:42.207048] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.262 Initializing NVMe Controllers 00:13:30.262 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:30.262 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:30.262 Initialization complete. Launching workers. 
00:13:30.262 ======================================================== 00:13:30.262 Latency(us) 00:13:30.262 Device Information : IOPS MiB/s Average min max 00:13:30.262 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34542.94 134.93 3705.64 1163.01 9808.46 00:13:30.262 ======================================================== 00:13:30.262 Total : 34542.94 134.93 3705.64 1163.01 9808.46 00:13:30.262 00:13:30.262 [2024-07-24 19:42:47.317591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.262 19:42:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:30.262 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.262 [2024-07-24 19:42:47.563322] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:35.530 Initializing NVMe Controllers 00:13:35.530 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:35.530 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:35.530 Initialization complete. Launching workers. 00:13:35.530 ======================================================== 00:13:35.530 Latency(us) 00:13:35.530 Device Information : IOPS MiB/s Average min max 00:13:35.530 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32033.19 125.13 3995.43 1227.88 8988.68 00:13:35.530 ======================================================== 00:13:35.530 Total : 32033.19 125.13 3995.43 1227.88 8988.68 00:13:35.530 00:13:35.530 [2024-07-24 19:42:52.586945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:35.530 19:42:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:35.530 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.530 [2024-07-24 19:42:52.803823] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:40.805 [2024-07-24 19:42:57.933397] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:40.806 Initializing NVMe Controllers 00:13:40.806 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.806 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.806 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:40.806 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:40.806 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:40.806 Initialization complete. Launching workers. 
00:13:40.806 Starting thread on core 2 00:13:40.806 Starting thread on core 3 00:13:40.806 Starting thread on core 1 00:13:40.806 19:42:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:40.806 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.065 [2024-07-24 19:42:58.240721] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.357 [2024-07-24 19:43:01.301977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.357 Initializing NVMe Controllers 00:13:44.357 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.357 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.357 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:44.357 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:44.357 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:44.357 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:44.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:44.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:44.357 Initialization complete. Launching workers. 00:13:44.357 Starting thread on core 1 with urgent priority queue 00:13:44.357 Starting thread on core 2 with urgent priority queue 00:13:44.357 Starting thread on core 3 with urgent priority queue 00:13:44.357 Starting thread on core 0 with urgent priority queue 00:13:44.357 SPDK bdev Controller (SPDK2 ) core 0: 3171.00 IO/s 31.54 secs/100000 ios 00:13:44.357 SPDK bdev Controller (SPDK2 ) core 1: 2666.33 IO/s 37.50 secs/100000 ios 00:13:44.357 SPDK bdev Controller (SPDK2 ) core 2: 2879.67 IO/s 34.73 secs/100000 ios 00:13:44.357 SPDK bdev Controller (SPDK2 ) core 3: 3105.00 IO/s 32.21 secs/100000 ios 00:13:44.357 ======================================================== 00:13:44.357 00:13:44.357 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:44.357 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.357 [2024-07-24 19:43:01.601694] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:44.357 Initializing NVMe Controllers 00:13:44.357 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.357 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:44.357 Namespace ID: 1 size: 0GB 00:13:44.357 Initialization complete. 00:13:44.357 INFO: using host memory buffer for IO 00:13:44.357 Hello world! 
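As a quick consistency check on the arbitration summary above: the secs/100000 ios column is just 100000 divided by the IO/s column, e.g. core 0 at 3171.00 IO/s gives 100000 / 3171.00 = 31.54 s, and core 1 at 2666.33 IO/s gives 100000 / 2666.33 = 37.50 s, matching the reported figures.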
00:13:44.357 [2024-07-24 19:43:01.614799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:44.357 19:43:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:44.357 EAL: No free 2048 kB hugepages reported on node 1 00:13:44.616 [2024-07-24 19:43:01.913518] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.994 Initializing NVMe Controllers 00:13:45.994 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.994 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:45.994 Initialization complete. Launching workers. 00:13:45.994 submit (in ns) avg, min, max = 6675.3, 3475.6, 4016568.9 00:13:45.994 complete (in ns) avg, min, max = 27910.1, 2045.6, 4045214.4 00:13:45.994 00:13:45.994 Submit histogram 00:13:45.994 ================ 00:13:45.994 Range in us Cumulative Count 00:13:45.994 3.461 - 3.484: 0.0230% ( 3) 00:13:45.994 3.484 - 3.508: 0.5593% ( 70) 00:13:45.994 3.508 - 3.532: 1.5554% ( 130) 00:13:45.994 3.532 - 3.556: 4.5667% ( 393) 00:13:45.994 3.556 - 3.579: 9.9303% ( 700) 00:13:45.994 3.579 - 3.603: 17.2784% ( 959) 00:13:45.994 3.603 - 3.627: 26.1436% ( 1157) 00:13:45.994 3.627 - 3.650: 36.2424% ( 1318) 00:13:45.994 3.650 - 3.674: 43.9966% ( 1012) 00:13:45.994 3.674 - 3.698: 50.8237% ( 891) 00:13:45.994 3.698 - 3.721: 56.6164% ( 756) 00:13:45.994 3.721 - 3.745: 61.5508% ( 644) 00:13:45.994 3.745 - 3.769: 66.2325% ( 611) 00:13:45.994 3.769 - 3.793: 69.9487% ( 485) 00:13:45.994 3.793 - 3.816: 73.3277% ( 441) 00:13:45.994 3.816 - 3.840: 76.6148% ( 429) 00:13:45.994 3.840 - 3.864: 80.3004% ( 481) 00:13:45.994 3.864 - 3.887: 83.2120% ( 380) 00:13:45.994 3.887 - 3.911: 85.8785% ( 348) 00:13:45.994 3.911 - 3.935: 87.8170% ( 253) 00:13:45.994 3.935 - 3.959: 89.3571% ( 201) 00:13:45.994 3.959 - 3.982: 91.0811% ( 225) 00:13:45.994 3.982 - 4.006: 92.2688% ( 155) 00:13:45.994 4.006 - 4.030: 93.4181% ( 150) 00:13:45.994 4.030 - 4.053: 94.2610% ( 110) 00:13:45.994 4.053 - 4.077: 94.9812% ( 94) 00:13:45.994 4.077 - 4.101: 95.6019% ( 81) 00:13:45.994 4.101 - 4.124: 95.9620% ( 47) 00:13:45.994 4.124 - 4.148: 96.2148% ( 33) 00:13:45.994 4.148 - 4.172: 96.4217% ( 27) 00:13:45.994 4.172 - 4.196: 96.6133% ( 25) 00:13:45.994 4.196 - 4.219: 96.7206% ( 14) 00:13:45.994 4.219 - 4.243: 96.7742% ( 7) 00:13:45.994 4.243 - 4.267: 96.9045% ( 17) 00:13:45.994 4.267 - 4.290: 97.0117% ( 14) 00:13:45.994 4.290 - 4.314: 97.1190% ( 14) 00:13:45.994 4.314 - 4.338: 97.1420% ( 3) 00:13:45.994 4.338 - 4.361: 97.1803% ( 5) 00:13:45.994 4.361 - 4.385: 97.2109% ( 4) 00:13:45.994 4.385 - 4.409: 97.2186% ( 1) 00:13:45.994 4.409 - 4.433: 97.2569% ( 5) 00:13:45.994 4.433 - 4.456: 97.2876% ( 4) 00:13:45.994 4.456 - 4.480: 97.3259% ( 5) 00:13:45.994 4.480 - 4.504: 97.3335% ( 1) 00:13:45.994 4.504 - 4.527: 97.3489% ( 2) 00:13:45.994 4.551 - 4.575: 97.3642% ( 2) 00:13:45.994 4.575 - 4.599: 97.3718% ( 1) 00:13:45.994 4.670 - 4.693: 97.3872% ( 2) 00:13:45.994 4.717 - 4.741: 97.3948% ( 1) 00:13:45.994 4.764 - 4.788: 97.4025% ( 1) 00:13:45.994 4.788 - 4.812: 97.4408% ( 5) 00:13:45.994 4.812 - 4.836: 97.4638% ( 3) 00:13:45.994 4.836 - 4.859: 97.4868% ( 3) 00:13:45.994 4.859 - 4.883: 97.5098% ( 3) 00:13:45.994 4.883 - 4.907: 97.5711% ( 8) 00:13:45.994 4.907 - 4.930: 97.6324% ( 8) 
00:13:45.994 4.930 - 4.954: 97.6937% ( 8) 00:13:45.994 4.954 - 4.978: 97.7473% ( 7) 00:13:45.994 4.978 - 5.001: 97.8009% ( 7) 00:13:45.994 5.001 - 5.025: 97.8392% ( 5) 00:13:45.994 5.025 - 5.049: 97.9082% ( 9) 00:13:45.994 5.049 - 5.073: 97.9848% ( 10) 00:13:45.994 5.073 - 5.096: 98.0078% ( 3) 00:13:45.994 5.096 - 5.120: 98.0538% ( 6) 00:13:45.994 5.120 - 5.144: 98.0691% ( 2) 00:13:45.994 5.144 - 5.167: 98.0921% ( 3) 00:13:45.994 5.167 - 5.191: 98.1304% ( 5) 00:13:45.994 5.191 - 5.215: 98.1534% ( 3) 00:13:45.994 5.215 - 5.239: 98.1840% ( 4) 00:13:45.994 5.262 - 5.286: 98.1917% ( 1) 00:13:45.994 5.286 - 5.310: 98.2224% ( 4) 00:13:45.994 5.428 - 5.452: 98.2300% ( 1) 00:13:45.994 5.476 - 5.499: 98.2377% ( 1) 00:13:45.994 5.689 - 5.713: 98.2530% ( 2) 00:13:45.994 5.713 - 5.736: 98.2683% ( 2) 00:13:45.994 5.831 - 5.855: 98.2760% ( 1) 00:13:45.994 5.950 - 5.973: 98.2837% ( 1) 00:13:45.995 6.210 - 6.258: 98.2913% ( 1) 00:13:45.995 6.305 - 6.353: 98.2990% ( 1) 00:13:45.995 6.400 - 6.447: 98.3066% ( 1) 00:13:45.995 6.495 - 6.542: 98.3143% ( 1) 00:13:45.995 6.590 - 6.637: 98.3220% ( 1) 00:13:45.995 6.684 - 6.732: 98.3296% ( 1) 00:13:45.995 6.732 - 6.779: 98.3373% ( 1) 00:13:45.995 6.779 - 6.827: 98.3450% ( 1) 00:13:45.995 7.016 - 7.064: 98.3603% ( 2) 00:13:45.995 7.253 - 7.301: 98.3756% ( 2) 00:13:45.995 7.443 - 7.490: 98.3833% ( 1) 00:13:45.995 7.490 - 7.538: 98.3909% ( 1) 00:13:45.995 7.538 - 7.585: 98.3986% ( 1) 00:13:45.995 7.633 - 7.680: 98.4063% ( 1) 00:13:45.995 7.680 - 7.727: 98.4139% ( 1) 00:13:45.995 7.870 - 7.917: 98.4292% ( 2) 00:13:45.995 7.917 - 7.964: 98.4446% ( 2) 00:13:45.995 7.964 - 8.012: 98.4522% ( 1) 00:13:45.995 8.012 - 8.059: 98.4599% ( 1) 00:13:45.995 8.107 - 8.154: 98.4676% ( 1) 00:13:45.995 8.154 - 8.201: 98.4752% ( 1) 00:13:45.995 8.201 - 8.249: 98.4829% ( 1) 00:13:45.995 8.249 - 8.296: 98.5059% ( 3) 00:13:45.995 8.296 - 8.344: 98.5442% ( 5) 00:13:45.995 8.391 - 8.439: 98.5595% ( 2) 00:13:45.995 8.439 - 8.486: 98.5825% ( 3) 00:13:45.995 8.533 - 8.581: 98.5901% ( 1) 00:13:45.995 8.581 - 8.628: 98.6055% ( 2) 00:13:45.995 8.628 - 8.676: 98.6361% ( 4) 00:13:45.995 8.723 - 8.770: 98.6514% ( 2) 00:13:45.995 8.913 - 8.960: 98.6668% ( 2) 00:13:45.995 9.055 - 9.102: 98.6744% ( 1) 00:13:45.995 9.102 - 9.150: 98.6821% ( 1) 00:13:45.995 9.150 - 9.197: 98.6898% ( 1) 00:13:45.995 9.292 - 9.339: 98.6974% ( 1) 00:13:45.995 9.624 - 9.671: 98.7051% ( 1) 00:13:45.995 9.671 - 9.719: 98.7127% ( 1) 00:13:45.995 9.813 - 9.861: 98.7204% ( 1) 00:13:45.995 9.908 - 9.956: 98.7281% ( 1) 00:13:45.995 9.956 - 10.003: 98.7357% ( 1) 00:13:45.995 10.003 - 10.050: 98.7511% ( 2) 00:13:45.995 10.335 - 10.382: 98.7587% ( 1) 00:13:45.995 10.477 - 10.524: 98.7664% ( 1) 00:13:45.995 10.524 - 10.572: 98.7740% ( 1) 00:13:45.995 10.619 - 10.667: 98.7894% ( 2) 00:13:45.995 10.667 - 10.714: 98.8047% ( 2) 00:13:45.995 10.714 - 10.761: 98.8124% ( 1) 00:13:45.995 10.761 - 10.809: 98.8200% ( 1) 00:13:45.995 10.999 - 11.046: 98.8277% ( 1) 00:13:45.995 11.093 - 11.141: 98.8353% ( 1) 00:13:45.995 11.188 - 11.236: 98.8430% ( 1) 00:13:45.995 11.378 - 11.425: 98.8507% ( 1) 00:13:45.995 11.473 - 11.520: 98.8583% ( 1) 00:13:45.995 11.520 - 11.567: 98.8660% ( 1) 00:13:45.995 11.567 - 11.615: 98.8736% ( 1) 00:13:45.995 11.899 - 11.947: 98.8813% ( 1) 00:13:45.995 12.089 - 12.136: 98.8890% ( 1) 00:13:45.995 12.136 - 12.231: 98.8966% ( 1) 00:13:45.995 12.231 - 12.326: 98.9043% ( 1) 00:13:45.995 12.326 - 12.421: 98.9196% ( 2) 00:13:45.995 12.516 - 12.610: 98.9273% ( 1) 00:13:45.995 12.610 - 12.705: 98.9426% ( 2) 00:13:45.995 12.705 - 
12.800: 98.9503% ( 1) 00:13:45.995 13.084 - 13.179: 98.9579% ( 1) 00:13:45.995 13.274 - 13.369: 98.9656% ( 1) 00:13:45.995 13.369 - 13.464: 98.9733% ( 1) 00:13:45.995 13.464 - 13.559: 98.9809% ( 1) 00:13:45.995 13.559 - 13.653: 98.9886% ( 1) 00:13:45.995 13.653 - 13.748: 98.9962% ( 1) 00:13:45.995 13.748 - 13.843: 99.0039% ( 1) 00:13:45.995 14.033 - 14.127: 99.0116% ( 1) 00:13:45.995 14.127 - 14.222: 99.0192% ( 1) 00:13:45.995 14.222 - 14.317: 99.0269% ( 1) 00:13:45.995 14.317 - 14.412: 99.0346% ( 1) 00:13:45.995 14.412 - 14.507: 99.0422% ( 1) 00:13:45.995 14.696 - 14.791: 99.0499% ( 1) 00:13:45.995 15.076 - 15.170: 99.0652% ( 2) 00:13:45.995 17.256 - 17.351: 99.0729% ( 1) 00:13:45.995 17.351 - 17.446: 99.0959% ( 3) 00:13:45.995 17.446 - 17.541: 99.1188% ( 3) 00:13:45.995 17.541 - 17.636: 99.1495% ( 4) 00:13:45.995 17.636 - 17.730: 99.2491% ( 13) 00:13:45.995 17.730 - 17.825: 99.2721% ( 3) 00:13:45.995 17.825 - 17.920: 99.2874% ( 2) 00:13:45.995 17.920 - 18.015: 99.3104% ( 3) 00:13:45.995 18.015 - 18.110: 99.3947% ( 11) 00:13:45.995 18.110 - 18.204: 99.4636% ( 9) 00:13:45.995 18.204 - 18.299: 99.5556% ( 12) 00:13:45.995 18.299 - 18.394: 99.6322% ( 10) 00:13:45.995 18.394 - 18.489: 99.6782% ( 6) 00:13:45.995 18.489 - 18.584: 99.7242% ( 6) 00:13:45.995 18.584 - 18.679: 99.7625% ( 5) 00:13:45.995 18.679 - 18.773: 99.7931% ( 4) 00:13:45.995 18.773 - 18.868: 99.8084% ( 2) 00:13:45.995 18.868 - 18.963: 99.8161% ( 1) 00:13:45.995 18.963 - 19.058: 99.8468% ( 4) 00:13:45.995 19.058 - 19.153: 99.8544% ( 1) 00:13:45.995 19.153 - 19.247: 99.8621% ( 1) 00:13:45.995 19.532 - 19.627: 99.8697% ( 1) 00:13:45.995 19.627 - 19.721: 99.8774% ( 1) 00:13:45.995 20.196 - 20.290: 99.8851% ( 1) 00:13:45.995 20.385 - 20.480: 99.8927% ( 1) 00:13:45.995 22.661 - 22.756: 99.9004% ( 1) 00:13:45.995 22.945 - 23.040: 99.9157% ( 2) 00:13:45.995 23.704 - 23.799: 99.9234% ( 1) 00:13:45.995 24.462 - 24.652: 99.9310% ( 1) 00:13:45.995 3980.705 - 4004.978: 99.9923% ( 8) 00:13:45.995 4004.978 - 4029.250: 100.0000% ( 1) 00:13:45.995 00:13:45.995 Complete histogram 00:13:45.995 ================== 00:13:45.995 Range in us Cumulative Count 00:13:45.995 2.039 - 2.050: 0.7126% ( 93) 00:13:45.995 2.050 - 2.062: 24.8640% ( 3152) 00:13:45.995 2.062 - 2.074: 43.8434% ( 2477) 00:13:45.995 2.074 - 2.086: 45.7513% ( 249) 00:13:45.995 2.086 - 2.098: 53.9729% ( 1073) 00:13:45.995 2.098 - 2.110: 57.3596% ( 442) 00:13:45.995 2.110 - 2.121: 61.0451% ( 481) 00:13:45.995 2.121 - 2.133: 74.9444% ( 1814) 00:13:45.995 2.133 - 2.145: 79.0591% ( 537) 00:13:45.995 2.145 - 2.157: 80.8980% ( 240) 00:13:45.995 2.157 - 2.169: 85.2578% ( 569) 00:13:45.995 2.169 - 2.181: 86.8056% ( 202) 00:13:45.995 2.181 - 2.193: 87.9166% ( 145) 00:13:45.995 2.193 - 2.204: 90.6291% ( 354) 00:13:45.995 2.204 - 2.216: 93.0044% ( 310) 00:13:45.995 2.216 - 2.228: 93.7553% ( 98) 00:13:45.995 2.228 - 2.240: 94.4985% ( 97) 00:13:45.995 2.240 - 2.252: 94.8740% ( 49) 00:13:45.995 2.252 - 2.264: 95.0578% ( 24) 00:13:45.995 2.264 - 2.276: 95.2494% ( 25) 00:13:45.995 2.276 - 2.287: 95.8164% ( 74) 00:13:45.995 2.287 - 2.299: 95.9390% ( 16) 00:13:45.995 2.299 - 2.311: 95.9697% ( 4) 00:13:45.995 2.311 - 2.323: 96.0310% ( 8) 00:13:45.995 2.323 - 2.335: 96.0769% ( 6) 00:13:45.995 2.335 - 2.347: 96.1842% ( 14) 00:13:45.995 2.347 - 2.359: 96.4677% ( 37) 00:13:45.995 2.359 - 2.370: 96.7972% ( 43) 00:13:45.995 2.370 - 2.382: 97.1343% ( 44) 00:13:45.995 2.382 - 2.394: 97.4715% ( 44) 00:13:45.995 2.394 - 2.406: 97.7013% ( 30) 00:13:45.995 2.406 - 2.418: 97.8469% ( 19) 00:13:45.995 2.418 - 2.430: 
98.0078% ( 21) 00:13:45.995 2.430 - 2.441: 98.1381% ( 17) 00:13:45.995 2.441 - 2.453: 98.2070% ( 9) 00:13:45.995 2.453 - 2.465: 98.2453% ( 5) 00:13:45.995 2.465 - 2.477: 98.2913% ( 6) 00:13:45.995 2.477 - 2.489: 98.3220% ( 4) 00:13:45.995 2.489 - 2.501: 98.3373% ( 2) 00:13:45.995 2.501 - 2.513: 98.3679% ( 4) 00:13:45.995 2.513 - 2.524: 98.3756% ( 1) 00:13:45.995 2.524 - 2.536: 98.3986% ( 3) 00:13:45.995 2.536 - 2.548: 98.4139% ( 2) 00:13:45.995 2.548 - 2.560: 98.4292% ( 2) 00:13:45.995 2.560 - 2.572: 98.4446% ( 2) 00:13:45.995 2.607 - 2.619: 98.4522% ( 1) 00:13:45.995 2.702 - 2.714: 98.4599% ( 1) 00:13:45.995 2.761 - 2.773: 98.4676% ( 1) 00:13:45.995 2.833 - 2.844: 98.4752% ( 1) 00:13:45.995 2.951 - 2.963: 98.4829% ( 1) 00:13:45.995 3.366 - 3.390: 98.4905% ( 1) 00:13:45.995 3.413 - 3.437: 98.4982% ( 1) 00:13:45.995 3.437 - 3.461: 98.5059% ( 1) 00:13:45.995 3.461 - 3.484: 98.5135% ( 1) 00:13:45.995 3.484 - 3.508: 98.5212% ( 1) 00:13:45.995 3.508 - 3.532: 98.5288% ( 1) 00:13:45.995 3.579 - 3.603: 98.5365% ( 1) 00:13:45.995 3.603 - 3.627: 98.5442% ( 1) 00:13:45.995 3.627 - 3.650: 98.5518% ( 1) 00:13:45.995 3.650 - 3.674: 98.5672% ( 2) 00:13:45.995 3.674 - 3.698: 98.5748% ( 1) 00:13:45.995 3.698 - 3.721: 98.5901% ( 2) 00:13:45.995 3.745 - 3.769: 98.6131% ( 3) 00:13:45.995 3.769 - 3.793: 98.6208% ( 1) 00:13:45.995 3.864 - 3.887: 98.6285% ( 1) 00:13:45.995 3.887 - 3.911: 98.6361% ( 1) 00:13:45.995
[2024-07-24 19:43:03.015022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
3.935 - 3.959: 98.6514% ( 2) 00:13:45.995 4.077 - 4.101: 98.6591% ( 1) 00:13:45.995 4.243 - 4.267: 98.6668% ( 1) 00:13:45.996 4.338 - 4.361: 98.6744% ( 1) 00:13:45.996 5.428 - 5.452: 98.6821% ( 1) 00:13:45.996 5.713 - 5.736: 98.6898% ( 1) 00:13:45.996 5.831 - 5.855: 98.6974% ( 1) 00:13:45.996 5.950 - 5.973: 98.7051% ( 1) 00:13:45.996 6.116 - 6.163: 98.7127% ( 1) 00:13:45.996 6.210 - 6.258: 98.7204% ( 1) 00:13:45.996 6.779 - 6.827: 98.7357% ( 2) 00:13:45.996 6.874 - 6.921: 98.7434% ( 1) 00:13:45.996 6.921 - 6.969: 98.7511% ( 1) 00:13:45.996 7.633 - 7.680: 98.7587% ( 1) 00:13:45.996 7.917 - 7.964: 98.7664% ( 1) 00:13:45.996 7.964 - 8.012: 98.7740% ( 1) 00:13:45.996 8.059 - 8.107: 98.7894% ( 2) 00:13:45.996 8.201 - 8.249: 98.7970% ( 1) 00:13:45.996 8.391 - 8.439: 98.8047% ( 1) 00:13:45.996 10.714 - 10.761: 98.8124% ( 1) 00:13:45.996 13.084 - 13.179: 98.8200% ( 1) 00:13:45.996 15.644 - 15.739: 98.8277% ( 1) 00:13:45.996 15.834 - 15.929: 98.8353% ( 1) 00:13:45.996 15.929 - 16.024: 98.8507% ( 2) 00:13:45.996 16.024 - 16.119: 98.8813% ( 4) 00:13:45.996 16.119 - 16.213: 98.9273% ( 6) 00:13:45.996 16.213 - 16.308: 98.9733% ( 6) 00:13:45.996 16.308 - 16.403: 98.9886% ( 2) 00:13:45.996 16.403 - 16.498: 99.0192% ( 4) 00:13:45.996 16.498 - 16.593: 99.0729% ( 7) 00:13:45.996 16.593 - 16.687: 99.1342% ( 8) 00:13:45.996 16.687 - 16.782: 99.1801% ( 6) 00:13:45.996 16.782 - 16.877: 99.1955% ( 2) 00:13:45.996 16.877 - 16.972: 99.2261% ( 4) 00:13:45.996 16.972 - 17.067: 99.2568% ( 4) 00:13:45.996 17.067 - 17.161: 99.2951% ( 5) 00:13:45.996 17.161 - 17.256: 99.3027% ( 1) 00:13:45.996 17.256 - 17.351: 99.3104% ( 1) 00:13:45.996 17.541 - 17.636: 99.3257% ( 2) 00:13:45.996 17.730 - 17.825: 99.3334% ( 1) 00:13:45.996 17.920 - 18.015: 99.3410% ( 1) 00:13:45.996 18.015 - 18.110: 99.3487% ( 1) 00:13:45.996 18.679 - 18.773: 99.3564% ( 1) 00:13:45.996 3398.163 - 3422.436: 99.3640% ( 1) 00:13:45.996 3980.705 - 4004.978: 99.8621% ( 65) 00:13:45.996 4004.978 - 4029.250: 99.9923% ( 17)
00:13:45.996 4029.250 - 4053.523: 100.0000% ( 1) 00:13:45.996 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:45.996 [ 00:13:45.996 { 00:13:45.996 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:45.996 "subtype": "Discovery", 00:13:45.996 "listen_addresses": [], 00:13:45.996 "allow_any_host": true, 00:13:45.996 "hosts": [] 00:13:45.996 }, 00:13:45.996 { 00:13:45.996 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:45.996 "subtype": "NVMe", 00:13:45.996 "listen_addresses": [ 00:13:45.996 { 00:13:45.996 "trtype": "VFIOUSER", 00:13:45.996 "adrfam": "IPv4", 00:13:45.996 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:45.996 "trsvcid": "0" 00:13:45.996 } 00:13:45.996 ], 00:13:45.996 "allow_any_host": true, 00:13:45.996 "hosts": [], 00:13:45.996 "serial_number": "SPDK1", 00:13:45.996 "model_number": "SPDK bdev Controller", 00:13:45.996 "max_namespaces": 32, 00:13:45.996 "min_cntlid": 1, 00:13:45.996 "max_cntlid": 65519, 00:13:45.996 "namespaces": [ 00:13:45.996 { 00:13:45.996 "nsid": 1, 00:13:45.996 "bdev_name": "Malloc1", 00:13:45.996 "name": "Malloc1", 00:13:45.996 "nguid": "51B66193708446D19EA48CEB8B535664", 00:13:45.996 "uuid": "51b66193-7084-46d1-9ea4-8ceb8b535664" 00:13:45.996 }, 00:13:45.996 { 00:13:45.996 "nsid": 2, 00:13:45.996 "bdev_name": "Malloc3", 00:13:45.996 "name": "Malloc3", 00:13:45.996 "nguid": "E2CDD9BED2D14CF282B85E958CF02297", 00:13:45.996 "uuid": "e2cdd9be-d2d1-4cf2-82b8-5e958cf02297" 00:13:45.996 } 00:13:45.996 ] 00:13:45.996 }, 00:13:45.996 { 00:13:45.996 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:45.996 "subtype": "NVMe", 00:13:45.996 "listen_addresses": [ 00:13:45.996 { 00:13:45.996 "trtype": "VFIOUSER", 00:13:45.996 "adrfam": "IPv4", 00:13:45.996 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:45.996 "trsvcid": "0" 00:13:45.996 } 00:13:45.996 ], 00:13:45.996 "allow_any_host": true, 00:13:45.996 "hosts": [], 00:13:45.996 "serial_number": "SPDK2", 00:13:45.996 "model_number": "SPDK bdev Controller", 00:13:45.996 "max_namespaces": 32, 00:13:45.996 "min_cntlid": 1, 00:13:45.996 "max_cntlid": 65519, 00:13:45.996 "namespaces": [ 00:13:45.996 { 00:13:45.996 "nsid": 1, 00:13:45.996 "bdev_name": "Malloc2", 00:13:45.996 "name": "Malloc2", 00:13:45.996 "nguid": "8072FF81EC0F4293924463E34175BC39", 00:13:45.996 "uuid": "8072ff81-ec0f-4293-9244-63e34175bc39" 00:13:45.996 } 00:13:45.996 ] 00:13:45.996 } 00:13:45.996 ] 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1158056 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' 
trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # local i=0 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1277 -- # return 0 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:45.996 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:46.254 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.254 [2024-07-24 19:43:03.485203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:46.254 Malloc4 00:13:46.254 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:46.512 [2024-07-24 19:43:03.863119] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:46.512 19:43:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:46.770 Asynchronous Event Request test 00:13:46.770 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:46.770 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:46.770 Registering asynchronous event callbacks... 00:13:46.770 Starting namespace attribute notice tests for all controllers... 00:13:46.770 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:46.770 aer_cb - Changed Namespace 00:13:46.770 Cleaning up... 
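The Asynchronous Event Request exchange above reduces to two RPCs against the live target: create a backing bdev, then attach it to the subsystem as a new namespace, which is what fires the Changed Namespace notice (log page 4) in the connected aer tool. A sketch using the exact commands from this run:

    # new backing device
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    # attaching it as NSID 2 raises the Changed Namespace AEN on the host side
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

The nvmf_get_subsystems listing that follows confirms the result: Malloc4 now appears as nsid 2 under nqn.2019-07.io.spdk:cnode2.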
00:13:46.770 [ 00:13:46.770 { 00:13:46.770 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:46.770 "subtype": "Discovery", 00:13:46.770 "listen_addresses": [], 00:13:46.770 "allow_any_host": true, 00:13:46.770 "hosts": [] 00:13:46.770 }, 00:13:46.770 { 00:13:46.770 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:46.770 "subtype": "NVMe", 00:13:46.770 "listen_addresses": [ 00:13:46.770 { 00:13:46.770 "trtype": "VFIOUSER", 00:13:46.770 "adrfam": "IPv4", 00:13:46.770 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:46.770 "trsvcid": "0" 00:13:46.770 } 00:13:46.770 ], 00:13:46.770 "allow_any_host": true, 00:13:46.770 "hosts": [], 00:13:46.770 "serial_number": "SPDK1", 00:13:46.770 "model_number": "SPDK bdev Controller", 00:13:46.770 "max_namespaces": 32, 00:13:46.770 "min_cntlid": 1, 00:13:46.770 "max_cntlid": 65519, 00:13:46.770 "namespaces": [ 00:13:46.770 { 00:13:46.770 "nsid": 1, 00:13:46.770 "bdev_name": "Malloc1", 00:13:46.770 "name": "Malloc1", 00:13:46.770 "nguid": "51B66193708446D19EA48CEB8B535664", 00:13:46.770 "uuid": "51b66193-7084-46d1-9ea4-8ceb8b535664" 00:13:46.770 }, 00:13:46.770 { 00:13:46.770 "nsid": 2, 00:13:46.770 "bdev_name": "Malloc3", 00:13:46.770 "name": "Malloc3", 00:13:46.770 "nguid": "E2CDD9BED2D14CF282B85E958CF02297", 00:13:46.770 "uuid": "e2cdd9be-d2d1-4cf2-82b8-5e958cf02297" 00:13:46.770 } 00:13:46.770 ] 00:13:46.770 }, 00:13:46.770 { 00:13:46.770 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:46.770 "subtype": "NVMe", 00:13:46.770 "listen_addresses": [ 00:13:46.770 { 00:13:46.770 "trtype": "VFIOUSER", 00:13:46.770 "adrfam": "IPv4", 00:13:46.770 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:46.770 "trsvcid": "0" 00:13:46.770 } 00:13:46.770 ], 00:13:46.770 "allow_any_host": true, 00:13:46.770 "hosts": [], 00:13:46.770 "serial_number": "SPDK2", 00:13:46.770 "model_number": "SPDK bdev Controller", 00:13:46.770 "max_namespaces": 32, 00:13:46.770 "min_cntlid": 1, 00:13:46.770 "max_cntlid": 65519, 00:13:46.770 "namespaces": [ 00:13:46.770 { 00:13:46.770 "nsid": 1, 00:13:46.770 "bdev_name": "Malloc2", 00:13:46.770 "name": "Malloc2", 00:13:46.770 "nguid": "8072FF81EC0F4293924463E34175BC39", 00:13:46.770 "uuid": "8072ff81-ec0f-4293-9244-63e34175bc39" 00:13:46.770 }, 00:13:46.770 { 00:13:46.770 "nsid": 2, 00:13:46.770 "bdev_name": "Malloc4", 00:13:46.770 "name": "Malloc4", 00:13:46.770 "nguid": "891CCDAB594749AA87D77B3CE2C7003B", 00:13:46.770 "uuid": "891ccdab-5947-49aa-87d7-7b3ce2c7003b" 00:13:46.770 } 00:13:46.770 ] 00:13:46.770 } 00:13:46.770 ] 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1158056 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1152579 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' -z 1152579 ']' 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # kill -0 1152579 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # uname 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1152579 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1152579' 00:13:46.770 killing process with pid 1152579 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # kill 1152579 00:13:46.770 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@975 -- # wait 1152579 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1158212 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1158212' 00:13:47.336 Process pid: 1158212 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1158212 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@832 -- # '[' -z 1158212 ']' 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:47.336 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:47.336 [2024-07-24 19:43:04.591342] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:47.336 [2024-07-24 19:43:04.592401] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:13:47.336 [2024-07-24 19:43:04.592460] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.336 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.336 [2024-07-24 19:43:04.656934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:47.594 [2024-07-24 19:43:04.770133] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.594 [2024-07-24 19:43:04.770187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.594 [2024-07-24 19:43:04.770215] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.594 [2024-07-24 19:43:04.770236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.594 [2024-07-24 19:43:04.770253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.594 [2024-07-24 19:43:04.770323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:47.594 [2024-07-24 19:43:04.770385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.594 [2024-07-24 19:43:04.770655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.594 [2024-07-24 19:43:04.770658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.594 [2024-07-24 19:43:04.874765] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:47.594 [2024-07-24 19:43:04.874994] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:47.594 [2024-07-24 19:43:04.875332] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:47.594 [2024-07-24 19:43:04.875993] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:47.594 [2024-07-24 19:43:04.876254] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
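With the target now restarted in interrupt mode, the bring-up that follows repeats the earlier transport/bdev/subsystem/listener sequence once per device. A condensed sketch of that RPC flow; every command appears verbatim in this log, but the loop form is a condensation rather than the script's literal text:

    # VFIOUSER transport; the -M -I flags are taken verbatim from this interrupt-mode pass
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    # one malloc-backed subsystem with a vfio-user listener per device
    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done

Here -a on nvmf_create_subsystem allows any host NQN to connect and -s sets the serial number, matching the "allow_any_host": true and "serial_number" fields in the subsystem listings earlier.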
00:13:47.594 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:47.594 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@865 -- # return 0 00:13:47.594 19:43:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:48.529 19:43:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:49.136 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:49.136 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:49.136 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.136 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:49.136 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:49.136 Malloc1 00:13:49.136 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:49.418 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:49.677 19:43:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:49.935 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:49.935 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:49.935 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:50.192 Malloc2 00:13:50.192 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:50.450 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:50.707 19:43:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1158212 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@951 -- # '[' -z 1158212 ']' 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # kill -0 1158212 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # uname 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1158212 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1158212' 00:13:50.965 killing process with pid 1158212 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # kill 1158212 00:13:50.965 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@975 -- # wait 1158212 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:51.531 00:13:51.531 real 0m52.551s 00:13:51.531 user 3m22.444s 00:13:51.531 sys 0m3.852s 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:51.531 ************************************ 00:13:51.531 END TEST nvmf_vfio_user 00:13:51.531 ************************************ 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:51.531 ************************************ 00:13:51.531 START TEST nvmf_vfio_user_nvme_compliance 00:13:51.531 ************************************ 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:51.531 * Looking for test storage... 
00:13:51.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.531 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1158793 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1158793' 00:13:51.532 Process pid: 1158793 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1158793 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # '[' -z 1158793 ']' 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:51.532 19:43:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:51.532 [2024-07-24 19:43:08.780968] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:13:51.532 [2024-07-24 19:43:08.781048] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.532 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.532 [2024-07-24 19:43:08.843558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:51.789 [2024-07-24 19:43:08.960169] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.789 [2024-07-24 19:43:08.960225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.789 [2024-07-24 19:43:08.960261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.789 [2024-07-24 19:43:08.960281] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.789 [2024-07-24 19:43:08.960291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.789 [2024-07-24 19:43:08.960368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.789 [2024-07-24 19:43:08.960427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.789 [2024-07-24 19:43:08.960430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.353 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:52.353 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@865 -- # return 0 00:13:52.353 19:43:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.723 malloc0 00:13:53.723 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:53.724 19:43:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:53.724 EAL: No free 2048 kB hugepages reported on node 1 00:13:53.724 00:13:53.724 00:13:53.724 CUnit - A unit testing framework for C - Version 2.1-3 00:13:53.724 http://cunit.sourceforge.net/ 00:13:53.724 00:13:53.724 00:13:53.724 Suite: nvme_compliance 00:13:53.724 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 19:43:10.931424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.724 [2024-07-24 19:43:10.932858] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:53.724 [2024-07-24 19:43:10.932882] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:53.724 [2024-07-24 19:43:10.932909] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:53.724 [2024-07-24 19:43:10.934449] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.724 passed 00:13:53.724 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 19:43:11.022089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.724 [2024-07-24 19:43:11.025116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.724 passed 00:13:53.981 Test: admin_identify_ns ...[2024-07-24 19:43:11.108712] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.981 [2024-07-24 19:43:11.169259] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:53.981 [2024-07-24 19:43:11.177271] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:53.981 [2024-07-24 
19:43:11.198396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.981 passed 00:13:53.981 Test: admin_get_features_mandatory_features ...[2024-07-24 19:43:11.281973] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:53.981 [2024-07-24 19:43:11.284992] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:53.981 passed 00:13:54.239 Test: admin_get_features_optional_features ...[2024-07-24 19:43:11.367563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.239 [2024-07-24 19:43:11.370584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.239 passed 00:13:54.239 Test: admin_set_features_number_of_queues ...[2024-07-24 19:43:11.453713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.239 [2024-07-24 19:43:11.558363] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.239 passed 00:13:54.496 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 19:43:11.642409] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.496 [2024-07-24 19:43:11.645433] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.496 passed 00:13:54.496 Test: admin_get_log_page_with_lpo ...[2024-07-24 19:43:11.728671] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.496 [2024-07-24 19:43:11.796284] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:54.496 [2024-07-24 19:43:11.809329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.496 passed 00:13:54.754 Test: fabric_property_get ...[2024-07-24 19:43:11.894591] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.754 [2024-07-24 19:43:11.895860] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:54.754 [2024-07-24 19:43:11.897613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.754 passed 00:13:54.754 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 19:43:11.981125] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:54.754 [2024-07-24 19:43:11.982492] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:54.754 [2024-07-24 19:43:11.984165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:54.754 passed 00:13:54.754 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 19:43:12.066363] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.011 [2024-07-24 19:43:12.152264] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:55.011 [2024-07-24 19:43:12.168280] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:55.011 [2024-07-24 19:43:12.173352] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.011 passed 00:13:55.011 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 19:43:12.257012] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.011 [2024-07-24 19:43:12.258341] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:13:55.011 [2024-07-24 19:43:12.260032] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.011 passed 00:13:55.011 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 19:43:12.344238] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.286 [2024-07-24 19:43:12.417269] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:55.286 [2024-07-24 19:43:12.441253] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:55.286 [2024-07-24 19:43:12.446370] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.286 passed 00:13:55.286 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 19:43:12.530398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.286 [2024-07-24 19:43:12.531730] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:55.286 [2024-07-24 19:43:12.531781] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:55.286 [2024-07-24 19:43:12.533414] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.286 passed 00:13:55.286 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 19:43:12.615582] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.544 [2024-07-24 19:43:12.710252] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:55.544 [2024-07-24 19:43:12.717404] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:55.544 [2024-07-24 19:43:12.725252] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:55.544 [2024-07-24 19:43:12.733256] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:55.544 [2024-07-24 19:43:12.762338] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.544 passed 00:13:55.544 Test: admin_create_io_sq_verify_pc ...[2024-07-24 19:43:12.846108] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:55.544 [2024-07-24 19:43:12.861266] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:55.544 [2024-07-24 19:43:12.878528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:55.544 passed 00:13:55.801 Test: admin_create_io_qp_max_qps ...[2024-07-24 19:43:12.961083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:56.728 [2024-07-24 19:43:14.073258] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:57.293 [2024-07-24 19:43:14.461993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.293 passed 00:13:57.293 Test: admin_create_io_sq_shared_cq ...[2024-07-24 19:43:14.543861] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:57.550 [2024-07-24 19:43:14.678248] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:57.550 [2024-07-24 19:43:14.715353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:57.550 passed 00:13:57.550 00:13:57.550 Run Summary: Type Total Ran Passed Failed Inactive 00:13:57.550 
suites 1 1 n/a 0 0 00:13:57.550 tests 18 18 18 0 0 00:13:57.550 asserts 360 360 360 0 n/a 00:13:57.550 00:13:57.550 Elapsed time = 1.571 seconds 00:13:57.550 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1158793 00:13:57.550 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' -z 1158793 ']' 00:13:57.550 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # kill -0 1158793 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # uname 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1158793 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1158793' 00:13:57.551 killing process with pid 1158793 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # kill 1158793 00:13:57.551 19:43:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@975 -- # wait 1158793 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:57.808 00:13:57.808 real 0m6.444s 00:13:57.808 user 0m18.326s 00:13:57.808 sys 0m0.576s 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # xtrace_disable 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:57.808 ************************************ 00:13:57.808 END TEST nvmf_vfio_user_nvme_compliance 00:13:57.808 ************************************ 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:57.808 ************************************ 00:13:57.808 START TEST nvmf_vfio_user_fuzz 00:13:57.808 ************************************ 00:13:57.808 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:58.068 * Looking for test storage... 
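The compliance teardown just above (killprocess, then kill -0 polling, then wait) uses a standard shell liveness idiom: kill -0 delivers no signal and only reports whether the PID still exists. A minimal sketch of that idiom, with the PID value copied from the run above purely as an example:

  pid=1158793                                  # example PID from the log above
  kill "$pid" 2>/dev/null                      # ask the target process to exit
  while kill -0 "$pid" 2>/dev/null; do         # kill -0 sends nothing; it only tests existence
      sleep 0.1
  done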
00:13:58.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=... (paths/export.sh@2-@4 repeatedly prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin; full duplicated values elided) 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo (PATH value elided) 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:13:58.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1159644 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1159644' 00:13:58.068 Process pid: 1159644 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1159644 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # '[' -z 1159644 ']' 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local max_retries=100 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:58.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
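The "[: : integer expression expected" complaint above is a harness artifact, not a test failure: an unset variable reaches test's numeric -eq operator as an empty string. A small sketch reproducing it and the usual guard (the variable name flag is illustrative, not the one used in common.sh):

  flag=''
  [ "$flag" -eq 1 ] && echo set                # -> [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] || echo 'flag unset'  # defaulting to 0 keeps the operand numeric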
00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@841 -- # xtrace_disable 00:13:58.068 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:58.327 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:13:58.327 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@865 -- # return 0 00:13:58.327 19:43:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.268 malloc0 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@562 -- # xtrace_disable 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
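Both this fuzz target and the earlier compliance target are assembled with the same handful of RPCs before a client is pointed at the vfio-user socket. A standalone sketch of that sequence, assuming a running nvmf_tgt and SPDK's in-tree scripts/rpc.py on PATH:

  rpc.py nvmf_create_transport -t VFIOUSER                    # register the vfio-user transport
  mkdir -p /var/run/vfio-user                                 # directory backing the listener
  rpc.py bdev_malloc_create 64 512 -b malloc0                 # 64 MiB RAM bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0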
00:13:59.268 19:43:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:31.335 Fuzzing completed. Shutting down the fuzz application 00:14:31.335 00:14:31.335 Dumping successful admin opcodes: 00:14:31.335 8, 9, 10, 24, 00:14:31.335 Dumping successful io opcodes: 00:14:31.335 0, 00:14:31.335 NS: 0x200003a1ef00 I/O qp, Total commands completed: 724157, total successful commands: 2820, random_seed: 1610350464 00:14:31.335 NS: 0x200003a1ef00 admin qp, Total commands completed: 92987, total successful commands: 754, random_seed: 995014144 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1159644 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' -z 1159644 ']' 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # kill -0 1159644 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # uname 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1159644 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1159644' 00:14:31.335 killing process with pid 1159644 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # kill 1159644 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@975 -- # wait 1159644 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:31.335 00:14:31.335 real 0m32.308s 00:14:31.335 user 0m33.988s 00:14:31.335 sys 0m27.136s 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # xtrace_disable 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:31.335 
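The opcode lists in the fuzz summary below are plain NVMe opcodes: admin 8, 9, 10 and 24 correspond to Abort, Set Features, Get Features and Keep Alive, and I/O opcode 0 is Flush (decodes taken from the NVMe base specification, not from the log itself). The success rates work out to well under one percent, as expected for randomly generated commands; a quick check from the totals reported above:

  awk 'BEGIN { printf "io: %.3f%%  admin: %.3f%%\n", 2820*100/724157, 754*100/92987 }'
  # -> io: 0.389%  admin: 0.811%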
************************************ 00:14:31.335 END TEST nvmf_vfio_user_fuzz 00:14:31.335 ************************************ 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.335 ************************************ 00:14:31.335 START TEST nvmf_auth_target 00:14:31.335 ************************************ 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:31.335 * Looking for test storage... 00:14:31.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.335 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.336 19:43:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2-@4 -- # PATH=... (same toolchain prefixes as above repeatedly prepended; full duplicated values elided) 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo (PATH value elided) 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
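The NVME_HOSTNQN exported during the common.sh sourcing above comes from nvme-cli's gen-hostnqn, which just wraps a random UUID in the standard 2014-08 NQN prefix. A sketch of both forms (the second line assumes a Linux kernel exposing /proc/sys/kernel/random/uuid):

  nvme gen-hostnqn                              # -> nqn.2014-08.org.nvmexpress:uuid:<uuid>
  # equivalent without nvme-cli:
  echo "nqn.2014-08.org.nvmexpress:uuid:$(cat /proc/sys/kernel/random/uuid)"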
00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # xtrace_disable 00:14:31.336 19:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.272 19:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # pci_devs=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -a pci_devs 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # pci_net_devs=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # pci_drivers=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -A pci_drivers 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # net_devs=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # local -ga net_devs 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # e810=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # local -ga e810 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # x722=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # local -ga x722 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # mlx=() 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # local -ga mlx 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.272 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 
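The e810/x722/mlx arrays above are keyed by PCI vendor:device IDs, and the harness then walks the matching /sys/bus/pci/devices entries looking for net interfaces. The same lookup can be done by hand; a sketch using the E810 ID that matches the two ports found below (0x8086:0x159b, copied from the log):

  lspci -d 8086:159b                            # list E810-family ports by vendor:device ID
  ls /sys/bus/pci/devices/0000:0a:00.0/net/     # net interface(s) behind one port, e.g. cvl_0_0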
00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:32.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:32.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:32.273 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:14:32.273 19:43:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:32.273 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # is_hw=yes 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:14:32.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:14:32.273 00:14:32.273 --- 10.0.0.2 ping statistics --- 00:14:32.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.273 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:32.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:14:32.273 00:14:32.273 --- 10.0.0.1 ping statistics --- 00:14:32.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.273 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # return 0 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@725 -- # xtrace_disable 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # nvmfpid=1164963 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # waitforlisten 1164963 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # '[' -z 1164963 ']' 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:32.273 19:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@865 -- # return 0 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@731 -- # xtrace_disable 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1165113 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=null 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # key=f175ad2c9df86f17a16fe644f822482cc82646368555b9ee 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.RVJ 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key f175ad2c9df86f17a16fe644f822482cc82646368555b9ee 0 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 f175ad2c9df86f17a16fe644f822482cc82646368555b9ee 0 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=f175ad2c9df86f17a16fe644f822482cc82646368555b9ee 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=0 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.RVJ 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.RVJ 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.RVJ 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha512 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=64 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # key=7f4422e968a74b15207fc7e44424621fd0653573f7d310ea4fa0fdb47a1578ef 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.SrT 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 7f4422e968a74b15207fc7e44424621fd0653573f7d310ea4fa0fdb47a1578ef 3 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 7f4422e968a74b15207fc7e44424621fd0653573f7d310ea4fa0fdb47a1578ef 3 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=7f4422e968a74b15207fc7e44424621fd0653573f7d310ea4fa0fdb47a1578ef 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=3 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.SrT 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.SrT 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.SrT 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len 
file key 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha256 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=32 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # key=fa38896110bc8cb672685dec5c940117 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.E1v 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key fa38896110bc8cb672685dec5c940117 1 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 fa38896110bc8cb672685dec5c940117 1 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=fa38896110bc8cb672685dec5c940117 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=1 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.E1v 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.E1v 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.E1v 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha384 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # key=1b2e0530ca0198ed93500b79a99ba89fd742e693da878f2c 00:14:33.677 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.JnG 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 
1b2e0530ca0198ed93500b79a99ba89fd742e693da878f2c 2 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 1b2e0530ca0198ed93500b79a99ba89fd742e693da878f2c 2 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=1b2e0530ca0198ed93500b79a99ba89fd742e693da878f2c 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=2 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.JnG 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.JnG 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.JnG 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha384 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=48 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # key=ba57959bb946b5282809de039f78892bc09f738cddce9115 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.gem 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key ba57959bb946b5282809de039f78892bc09f738cddce9115 2 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 ba57959bb946b5282809de039f78892bc09f738cddce9115 2 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=ba57959bb946b5282809de039f78892bc09f738cddce9115 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=2 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.gem 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.gem 
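Every gen_dhchap_key call traced above follows one recipe: xxd pulls len/2 random bytes from /dev/urandom as a hex string, and a small inline python step wraps that string in the DH-HMAC-CHAP secret format before it lands in a mode-0600 temp file. The sketch below is a minimal standalone approximation of that wrapping step, not a copy of the nvmf/common.sh helper; in particular, the little-endian CRC32 suffix is an assumption inferred from the base64 payloads visible in the later nvme connect commands in this log.

# sketch only -- approximates the gen_dhchap_key/format_dhchap_key pair seen in the trace
key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> len=48 hex characters
digest=0                               # null=0, sha256=1, sha384=2, sha512=3
python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key))   # assumed: little-endian CRC32 of the ASCII key
print(f"DHHC-1:{digest:02}:{base64.b64encode(key + crc).decode()}:")
EOF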
00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.gem 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha256 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=32 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # key=ad888b4431673f67e9dd255c691f535d 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.Q6F 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key ad888b4431673f67e9dd255c691f535d 1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 ad888b4431673f67e9dd255c691f535d 1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=ad888b4431673f67e9dd255c691f535d 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.Q6F 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.Q6F 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.Q6F 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # local digest len file key 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local -A digests 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=sha512 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # len=64 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # 
key=743a1c5ff2392dedf839004bce8386e2f8630bc83c08cf55da8f1a12cbd6c469 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.ZHZ 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # format_dhchap_key 743a1c5ff2392dedf839004bce8386e2f8630bc83c08cf55da8f1a12cbd6c469 3 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # format_key DHHC-1 743a1c5ff2392dedf839004bce8386e2f8630bc83c08cf55da8f1a12cbd6c469 3 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@706 -- # local prefix key digest 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # key=743a1c5ff2392dedf839004bce8386e2f8630bc83c08cf55da8f1a12cbd6c469 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@708 -- # digest=3 00:14:33.678 19:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@709 -- # python - 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.ZHZ 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.ZHZ 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.ZHZ 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1164963 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # '[' -z 1164963 ']' 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
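Two SPDK processes are in play from here on: the nvmf target (pid 1164963, RPC on the default /var/tmp/spdk.sock, running inside the cvl_0_0_ns_spdk namespace) and the host-side app (pid 1165113, RPC on /var/tmp/host.sock). The registration loop that follows installs each generated key file on both sides with keyring_file_add_key; condensed from the trace, one iteration looks like this (file names are per-run mktemp outputs):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side (default RPC socket /var/tmp/spdk.sock)
$RPC keyring_file_add_key key0 /tmp/spdk.key-null.RVJ
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SrT
# host side ("hostrpc" in the trace == rpc.py -s /var/tmp/host.sock)
$RPC -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RVJ
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SrT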
00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:33.678 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@865 -- # return 0 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1165113 /var/tmp/host.sock 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # '[' -z 1165113 ']' 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/host.sock 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:14:33.935 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:33.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:33.936 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:14:33.936 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.192 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:14:34.192 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@865 -- # return 0 00:14:34.192 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:34.192 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:34.192 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.192 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.RVJ 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.RVJ 00:14:34.193 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.RVJ 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.SrT ]] 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SrT 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # 
xtrace_disable 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SrT 00:14:34.756 19:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SrT 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.E1v 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.E1v 00:14:34.756 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.E1v 00:14:35.013 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.JnG ]] 00:14:35.013 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JnG 00:14:35.013 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:35.013 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.270 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:35.270 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JnG 00:14:35.270 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.JnG 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.gem 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.gem 00:14:35.527 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.gem 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.Q6F ]] 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6F 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6F 00:14:35.785 19:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Q6F 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ZHZ 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ZHZ 00:14:36.042 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ZHZ 00:14:36.299 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:36.299 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:36.299 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.299 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.299 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.299 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:36.557 19:43:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.557 19:43:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.815 00:14:36.815 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.815 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.815 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.073 { 00:14:37.073 "cntlid": 1, 00:14:37.073 "qid": 0, 00:14:37.073 "state": "enabled", 00:14:37.073 "thread": "nvmf_tgt_poll_group_000", 00:14:37.073 "listen_address": { 00:14:37.073 "trtype": "TCP", 00:14:37.073 "adrfam": "IPv4", 00:14:37.073 "traddr": "10.0.0.2", 00:14:37.073 "trsvcid": "4420" 00:14:37.073 }, 00:14:37.073 "peer_address": { 00:14:37.073 "trtype": "TCP", 00:14:37.073 "adrfam": "IPv4", 00:14:37.073 "traddr": "10.0.0.1", 00:14:37.073 "trsvcid": "53858" 00:14:37.073 }, 00:14:37.073 "auth": { 00:14:37.073 "state": "completed", 00:14:37.073 "digest": "sha256", 00:14:37.073 "dhgroup": "null" 00:14:37.073 } 00:14:37.073 } 00:14:37.073 ]' 00:14:37.073 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.331 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.331 19:43:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.331 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:37.331 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.331 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.331 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.331 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.588 19:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:14:38.522 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.522 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.523 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:38.523 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.523 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:38.523 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.523 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.523 19:43:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.780 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.038 00:14:39.038 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.038 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.038 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.295 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.295 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.296 { 00:14:39.296 "cntlid": 3, 00:14:39.296 "qid": 0, 00:14:39.296 "state": "enabled", 00:14:39.296 "thread": "nvmf_tgt_poll_group_000", 00:14:39.296 "listen_address": { 00:14:39.296 "trtype": "TCP", 00:14:39.296 "adrfam": "IPv4", 00:14:39.296 "traddr": "10.0.0.2", 00:14:39.296 "trsvcid": "4420" 00:14:39.296 }, 00:14:39.296 "peer_address": { 00:14:39.296 "trtype": "TCP", 00:14:39.296 "adrfam": "IPv4", 00:14:39.296 "traddr": "10.0.0.1", 00:14:39.296 "trsvcid": "53892" 00:14:39.296 }, 00:14:39.296 "auth": { 00:14:39.296 "state": "completed", 00:14:39.296 "digest": "sha256", 00:14:39.296 "dhgroup": "null" 00:14:39.296 } 00:14:39.296 } 00:14:39.296 ]' 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:39.296 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.553 19:43:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.553 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.553 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.811 19:43:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:40.744 19:43:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 
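Each connect_authenticate round in this section has the same five-step shape, with only the digest, dhgroup, and key index changing. Condensed from the surrounding trace (RPC as defined above; the host NQN/UUID is this run's 5b23e107-... value):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
# 1. pick the auth parameters on the host app
$RPC -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# 2. allow the host on the subsystem, binding the DH-HMAC-CHAP key pair
$RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 3. attach; this is where authentication actually runs
$RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
# 4. verify the qpair negotiated auth: state "completed", expected digest and dhgroup
$RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
# 5. tear down before the next round
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0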
00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.001 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.259 00:14:41.259 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.259 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.259 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.517 { 00:14:41.517 "cntlid": 5, 00:14:41.517 "qid": 0, 00:14:41.517 "state": "enabled", 00:14:41.517 "thread": "nvmf_tgt_poll_group_000", 00:14:41.517 "listen_address": { 00:14:41.517 "trtype": "TCP", 00:14:41.517 "adrfam": "IPv4", 00:14:41.517 "traddr": "10.0.0.2", 00:14:41.517 "trsvcid": "4420" 00:14:41.517 }, 00:14:41.517 "peer_address": { 00:14:41.517 "trtype": "TCP", 00:14:41.517 "adrfam": "IPv4", 00:14:41.517 "traddr": "10.0.0.1", 00:14:41.517 "trsvcid": "53934" 00:14:41.517 }, 00:14:41.517 "auth": { 00:14:41.517 "state": "completed", 00:14:41.517 "digest": "sha256", 00:14:41.517 "dhgroup": "null" 00:14:41.517 } 00:14:41.517 } 00:14:41.517 ]' 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.517 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.775 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:41.775 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.775 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.775 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.775 19:43:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.032 19:43:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:42.963 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.220 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.478 00:14:43.478 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.478 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.478 19:44:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.735 { 00:14:43.735 "cntlid": 7, 00:14:43.735 "qid": 0, 00:14:43.735 "state": "enabled", 00:14:43.735 "thread": "nvmf_tgt_poll_group_000", 00:14:43.735 "listen_address": { 00:14:43.735 "trtype": "TCP", 00:14:43.735 "adrfam": "IPv4", 00:14:43.735 "traddr": "10.0.0.2", 00:14:43.735 "trsvcid": "4420" 00:14:43.735 }, 00:14:43.735 "peer_address": { 00:14:43.735 "trtype": "TCP", 00:14:43.735 "adrfam": "IPv4", 00:14:43.735 "traddr": "10.0.0.1", 00:14:43.735 "trsvcid": "60374" 00:14:43.735 }, 00:14:43.735 "auth": { 00:14:43.735 "state": "completed", 00:14:43.735 "digest": "sha256", 00:14:43.735 "dhgroup": "null" 00:14:43.735 } 00:14:43.735 } 00:14:43.735 ]' 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.735 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.992 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:43.992 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.992 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.992 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.992 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.249 19:44:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.185 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.443 19:44:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:45.701 00:14:45.701 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.701 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.701 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.959 { 00:14:45.959 "cntlid": 9, 00:14:45.959 "qid": 0, 00:14:45.959 "state": "enabled", 00:14:45.959 "thread": "nvmf_tgt_poll_group_000", 00:14:45.959 "listen_address": { 00:14:45.959 "trtype": "TCP", 00:14:45.959 "adrfam": "IPv4", 00:14:45.959 "traddr": "10.0.0.2", 00:14:45.959 "trsvcid": "4420" 00:14:45.959 }, 00:14:45.959 "peer_address": { 00:14:45.959 "trtype": "TCP", 00:14:45.959 "adrfam": "IPv4", 00:14:45.959 "traddr": "10.0.0.1", 00:14:45.959 "trsvcid": "60404" 00:14:45.959 }, 00:14:45.959 "auth": { 00:14:45.959 "state": "completed", 00:14:45.959 "digest": "sha256", 00:14:45.959 "dhgroup": "ffdhe2048" 00:14:45.959 } 00:14:45.959 } 00:14:45.959 ]' 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:45.959 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.226 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.226 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.226 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.484 19:44:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.417 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.675 19:44:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.933 00:14:47.933 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.933 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.933 19:44:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.191 { 00:14:48.191 "cntlid": 11, 00:14:48.191 "qid": 0, 00:14:48.191 "state": "enabled", 00:14:48.191 "thread": "nvmf_tgt_poll_group_000", 00:14:48.191 "listen_address": { 00:14:48.191 "trtype": "TCP", 00:14:48.191 "adrfam": "IPv4", 00:14:48.191 "traddr": "10.0.0.2", 00:14:48.191 "trsvcid": "4420" 00:14:48.191 }, 00:14:48.191 "peer_address": { 00:14:48.191 "trtype": "TCP", 00:14:48.191 "adrfam": "IPv4", 00:14:48.191 "traddr": "10.0.0.1", 00:14:48.191 "trsvcid": "60442" 00:14:48.191 }, 00:14:48.191 "auth": { 00:14:48.191 "state": "completed", 00:14:48.191 "digest": "sha256", 00:14:48.191 "dhgroup": "ffdhe2048" 00:14:48.191 } 00:14:48.191 } 00:14:48.191 ]' 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.191 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.456 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.456 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.456 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.456 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.456 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.799 19:44:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
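For readers following the trace: each pass of this auth loop is the same five-step cycle, summarized below as a minimal sketch assembled only from the RPC calls visible in the trace. Here rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used above; target-side calls go through the default RPC socket while host-side calls use -s /var/tmp/host.sock, and the DHHC-1 secrets are assumed to be already registered as key0..key3 / ckey0..ckey3 during the earlier setup of this test.

# Minimal sketch of one connect_authenticate iteration (assumptions noted above).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# 1. Pin the host initiator to one digest/dhgroup combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2. Allow the host on the target, binding the DH-HMAC-CHAP keys to it
#    (for key3 the trace omits the controller key, per the
#    ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion).
rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller from the host side with the matching keys.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Verify the negotiated parameters on the resulting qpair.
rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" \
    | jq -e '.[0].auth | .digest == "sha256" and .dhgroup == "ffdhe2048"
             and .state == "completed"'

# 5. Tear down before the next digest/dhgroup/key combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The jq -e check mirrors the [[ sha256 == \s\h\a\2\5\6 ]]-style comparisons in the trace: the qpair's auth block must report exactly the digest and dhgroup that were forced via bdev_nvme_set_options, with state "completed".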
00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.750 19:44:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.008 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.267 00:14:50.267 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.267 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.267 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.525 19:44:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.525 { 00:14:50.525 "cntlid": 13, 00:14:50.525 "qid": 0, 00:14:50.525 "state": "enabled", 00:14:50.525 "thread": "nvmf_tgt_poll_group_000", 00:14:50.525 "listen_address": { 00:14:50.525 "trtype": "TCP", 00:14:50.525 "adrfam": "IPv4", 00:14:50.525 "traddr": "10.0.0.2", 00:14:50.525 "trsvcid": "4420" 00:14:50.525 }, 00:14:50.525 "peer_address": { 00:14:50.525 "trtype": "TCP", 00:14:50.525 "adrfam": "IPv4", 00:14:50.525 "traddr": "10.0.0.1", 00:14:50.525 "trsvcid": "60476" 00:14:50.525 }, 00:14:50.525 "auth": { 00:14:50.525 "state": "completed", 00:14:50.525 "digest": "sha256", 00:14:50.525 "dhgroup": "ffdhe2048" 00:14:50.525 } 00:14:50.525 } 00:14:50.525 ]' 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.525 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:50.526 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.526 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.526 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.526 19:44:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.784 19:44:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # 
[[ 0 == 0 ]] 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:52.155 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.156 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.721 00:14:52.721 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.721 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.721 19:44:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.721 { 00:14:52.721 "cntlid": 15, 00:14:52.721 "qid": 0, 00:14:52.721 "state": "enabled", 00:14:52.721 "thread": "nvmf_tgt_poll_group_000", 00:14:52.721 "listen_address": { 00:14:52.721 "trtype": "TCP", 00:14:52.721 "adrfam": "IPv4", 00:14:52.721 "traddr": "10.0.0.2", 00:14:52.721 "trsvcid": "4420" 00:14:52.721 }, 00:14:52.721 "peer_address": { 00:14:52.721 "trtype": "TCP", 00:14:52.721 "adrfam": "IPv4", 00:14:52.721 "traddr": "10.0.0.1", 00:14:52.721 "trsvcid": "40810" 00:14:52.721 }, 00:14:52.721 "auth": { 00:14:52.721 "state": "completed", 00:14:52.721 "digest": "sha256", 00:14:52.721 "dhgroup": "ffdhe2048" 00:14:52.721 } 00:14:52.721 } 00:14:52.721 ]' 00:14:52.721 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.979 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.237 19:44:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.172 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.173 19:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.430 19:44:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.999 00:14:54.999 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.999 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.999 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.257 { 00:14:55.257 "cntlid": 17, 00:14:55.257 "qid": 0, 00:14:55.257 "state": "enabled", 00:14:55.257 
"thread": "nvmf_tgt_poll_group_000", 00:14:55.257 "listen_address": { 00:14:55.257 "trtype": "TCP", 00:14:55.257 "adrfam": "IPv4", 00:14:55.257 "traddr": "10.0.0.2", 00:14:55.257 "trsvcid": "4420" 00:14:55.257 }, 00:14:55.257 "peer_address": { 00:14:55.257 "trtype": "TCP", 00:14:55.257 "adrfam": "IPv4", 00:14:55.257 "traddr": "10.0.0.1", 00:14:55.257 "trsvcid": "40840" 00:14:55.257 }, 00:14:55.257 "auth": { 00:14:55.257 "state": "completed", 00:14:55.257 "digest": "sha256", 00:14:55.257 "dhgroup": "ffdhe3072" 00:14:55.257 } 00:14:55.257 } 00:14:55.257 ]' 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.257 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.516 19:44:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:56.454 19:44:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha256 ffdhe3072 1 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.712 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.279 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.279 { 00:14:57.279 "cntlid": 19, 00:14:57.279 "qid": 0, 00:14:57.279 "state": "enabled", 00:14:57.279 "thread": "nvmf_tgt_poll_group_000", 00:14:57.279 "listen_address": { 00:14:57.279 "trtype": "TCP", 00:14:57.279 "adrfam": "IPv4", 00:14:57.279 "traddr": "10.0.0.2", 00:14:57.279 "trsvcid": "4420" 00:14:57.279 }, 00:14:57.279 "peer_address": { 00:14:57.279 "trtype": "TCP", 00:14:57.279 "adrfam": "IPv4", 00:14:57.279 
"traddr": "10.0.0.1", 00:14:57.279 "trsvcid": "40852" 00:14:57.279 }, 00:14:57.279 "auth": { 00:14:57.279 "state": "completed", 00:14:57.279 "digest": "sha256", 00:14:57.279 "dhgroup": "ffdhe3072" 00:14:57.279 } 00:14:57.279 } 00:14:57.279 ]' 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.279 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.538 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:57.538 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.538 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.538 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.538 19:44:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.796 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:58.733 19:44:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:58.991 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:58.992 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.249 00:14:59.249 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.249 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.249 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.508 { 00:14:59.508 "cntlid": 21, 00:14:59.508 "qid": 0, 00:14:59.508 "state": "enabled", 00:14:59.508 "thread": "nvmf_tgt_poll_group_000", 00:14:59.508 "listen_address": { 00:14:59.508 "trtype": "TCP", 00:14:59.508 "adrfam": "IPv4", 00:14:59.508 "traddr": "10.0.0.2", 00:14:59.508 "trsvcid": "4420" 00:14:59.508 }, 00:14:59.508 "peer_address": { 00:14:59.508 "trtype": "TCP", 00:14:59.508 "adrfam": "IPv4", 00:14:59.508 "traddr": "10.0.0.1", 00:14:59.508 "trsvcid": "40878" 00:14:59.508 }, 00:14:59.508 "auth": { 00:14:59.508 "state": "completed", 00:14:59.508 "digest": "sha256", 00:14:59.508 "dhgroup": "ffdhe3072" 00:14:59.508 } 00:14:59.508 } 00:14:59.508 ]' 00:14:59.508 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.766 19:44:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.025 19:44:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:00.963 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.221 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.479 00:15:01.479 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.479 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.479 19:44:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.737 { 00:15:01.737 "cntlid": 23, 00:15:01.737 "qid": 0, 00:15:01.737 "state": "enabled", 00:15:01.737 "thread": "nvmf_tgt_poll_group_000", 00:15:01.737 "listen_address": { 00:15:01.737 "trtype": "TCP", 00:15:01.737 "adrfam": "IPv4", 00:15:01.737 "traddr": "10.0.0.2", 00:15:01.737 "trsvcid": "4420" 00:15:01.737 }, 00:15:01.737 "peer_address": { 00:15:01.737 "trtype": "TCP", 00:15:01.737 "adrfam": "IPv4", 00:15:01.737 "traddr": "10.0.0.1", 00:15:01.737 "trsvcid": "40908" 00:15:01.737 }, 00:15:01.737 "auth": { 00:15:01.737 "state": "completed", 00:15:01.737 "digest": "sha256", 00:15:01.737 "dhgroup": "ffdhe3072" 00:15:01.737 } 00:15:01.737 } 00:15:01.737 ]' 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.737 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.996 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.996 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:01.996 19:44:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.996 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.996 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.996 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.254 19:44:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.190 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:03.449 19:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.449 19:44:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:03.708 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:03.967 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.225 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:04.225 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.225 { 00:15:04.225 "cntlid": 25, 00:15:04.225 "qid": 0, 00:15:04.225 "state": "enabled", 00:15:04.225 "thread": "nvmf_tgt_poll_group_000", 00:15:04.225 "listen_address": { 00:15:04.225 "trtype": "TCP", 00:15:04.225 "adrfam": "IPv4", 00:15:04.225 "traddr": "10.0.0.2", 00:15:04.225 "trsvcid": "4420" 00:15:04.225 }, 00:15:04.225 "peer_address": { 00:15:04.225 "trtype": "TCP", 00:15:04.225 "adrfam": "IPv4", 00:15:04.225 "traddr": "10.0.0.1", 00:15:04.225 "trsvcid": "52310" 00:15:04.225 }, 00:15:04.226 "auth": { 00:15:04.226 "state": "completed", 00:15:04.226 "digest": "sha256", 00:15:04.226 "dhgroup": "ffdhe4096" 00:15:04.226 } 00:15:04.226 } 00:15:04.226 ]' 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.226 19:44:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.226 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.485 19:44:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.457 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.714 19:44:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:05.971 00:15:05.971 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.971 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.971 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.229 { 00:15:06.229 "cntlid": 27, 00:15:06.229 "qid": 0, 00:15:06.229 "state": "enabled", 00:15:06.229 "thread": "nvmf_tgt_poll_group_000", 00:15:06.229 "listen_address": { 00:15:06.229 "trtype": "TCP", 00:15:06.229 "adrfam": "IPv4", 00:15:06.229 "traddr": "10.0.0.2", 00:15:06.229 "trsvcid": "4420" 00:15:06.229 }, 00:15:06.229 "peer_address": { 00:15:06.229 "trtype": "TCP", 00:15:06.229 "adrfam": "IPv4", 00:15:06.229 "traddr": "10.0.0.1", 00:15:06.229 "trsvcid": "52346" 00:15:06.229 }, 00:15:06.229 "auth": { 00:15:06.229 "state": "completed", 00:15:06.229 "digest": "sha256", 00:15:06.229 "dhgroup": "ffdhe4096" 00:15:06.229 } 00:15:06.229 } 00:15:06.229 ]' 00:15:06.229 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.487 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:06.745 19:44:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.683 19:44:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:07.941 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.199 00:15:08.199 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.199 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.199 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.765 { 00:15:08.765 "cntlid": 29, 00:15:08.765 "qid": 0, 00:15:08.765 "state": "enabled", 00:15:08.765 "thread": "nvmf_tgt_poll_group_000", 00:15:08.765 "listen_address": { 00:15:08.765 "trtype": "TCP", 00:15:08.765 "adrfam": "IPv4", 00:15:08.765 "traddr": "10.0.0.2", 00:15:08.765 "trsvcid": "4420" 00:15:08.765 }, 00:15:08.765 "peer_address": { 00:15:08.765 "trtype": "TCP", 00:15:08.765 "adrfam": "IPv4", 00:15:08.765 "traddr": "10.0.0.1", 00:15:08.765 "trsvcid": "52364" 00:15:08.765 }, 00:15:08.765 "auth": { 00:15:08.765 "state": "completed", 00:15:08.765 "digest": "sha256", 00:15:08.765 "dhgroup": "ffdhe4096" 00:15:08.765 } 00:15:08.765 } 00:15:08.765 ]' 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.765 19:44:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.765 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.765 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.765 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.023 19:44:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:09.958 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:10.216 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:10.520 00:15:10.779 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.779 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.779 19:44:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.779 { 00:15:10.779 "cntlid": 31, 00:15:10.779 "qid": 0, 00:15:10.779 "state": "enabled", 00:15:10.779 "thread": "nvmf_tgt_poll_group_000", 00:15:10.779 "listen_address": { 00:15:10.779 "trtype": "TCP", 00:15:10.779 "adrfam": "IPv4", 00:15:10.779 "traddr": "10.0.0.2", 00:15:10.779 "trsvcid": "4420" 00:15:10.779 }, 00:15:10.779 "peer_address": { 00:15:10.779 "trtype": "TCP", 00:15:10.779 "adrfam": "IPv4", 00:15:10.779 "traddr": "10.0.0.1", 00:15:10.779 "trsvcid": "52382" 00:15:10.779 }, 00:15:10.779 "auth": { 00:15:10.779 "state": "completed", 00:15:10.779 "digest": "sha256", 00:15:10.779 "dhgroup": "ffdhe4096" 00:15:10.779 } 00:15:10.779 } 00:15:10.779 ]' 00:15:10.779 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.037 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.295 19:44:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.230 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.488 19:44:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.055 00:15:13.055 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.055 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.055 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.314 { 00:15:13.314 "cntlid": 33, 00:15:13.314 "qid": 0, 00:15:13.314 "state": "enabled", 00:15:13.314 "thread": "nvmf_tgt_poll_group_000", 00:15:13.314 "listen_address": { 00:15:13.314 "trtype": "TCP", 00:15:13.314 "adrfam": "IPv4", 00:15:13.314 "traddr": "10.0.0.2", 00:15:13.314 "trsvcid": "4420" 00:15:13.314 }, 00:15:13.314 "peer_address": { 00:15:13.314 "trtype": "TCP", 00:15:13.314 "adrfam": "IPv4", 00:15:13.314 "traddr": "10.0.0.1", 00:15:13.314 "trsvcid": "56602" 00:15:13.314 }, 00:15:13.314 "auth": { 00:15:13.314 "state": "completed", 00:15:13.314 "digest": "sha256", 00:15:13.314 "dhgroup": "ffdhe6144" 00:15:13.314 } 00:15:13.314 } 00:15:13.314 ]' 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.314 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.572 19:44:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:14.951 19:44:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.951 19:44:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.951 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.519 00:15:15.519 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.519 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.519 19:44:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.778 { 00:15:15.778 "cntlid": 35, 00:15:15.778 "qid": 0, 00:15:15.778 "state": "enabled", 00:15:15.778 "thread": "nvmf_tgt_poll_group_000", 00:15:15.778 "listen_address": { 00:15:15.778 "trtype": "TCP", 00:15:15.778 "adrfam": "IPv4", 00:15:15.778 "traddr": "10.0.0.2", 00:15:15.778 "trsvcid": "4420" 00:15:15.778 }, 00:15:15.778 "peer_address": { 00:15:15.778 "trtype": "TCP", 00:15:15.778 "adrfam": "IPv4", 00:15:15.778 "traddr": "10.0.0.1", 00:15:15.778 "trsvcid": "56636" 00:15:15.778 }, 00:15:15.778 "auth": { 00:15:15.778 "state": "completed", 00:15:15.778 "digest": "sha256", 00:15:15.778 "dhgroup": "ffdhe6144" 00:15:15.778 } 00:15:15.778 } 00:15:15.778 ]' 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.778 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.037 19:44:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:17.410 19:44:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.410 19:44:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:17.977 00:15:17.977 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.977 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.977 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.235 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.235 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.235 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:18.235 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.235 19:44:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.236 { 00:15:18.236 "cntlid": 37, 00:15:18.236 "qid": 0, 00:15:18.236 "state": "enabled", 00:15:18.236 "thread": "nvmf_tgt_poll_group_000", 00:15:18.236 "listen_address": { 00:15:18.236 "trtype": "TCP", 00:15:18.236 "adrfam": "IPv4", 00:15:18.236 "traddr": "10.0.0.2", 00:15:18.236 "trsvcid": "4420" 00:15:18.236 }, 00:15:18.236 "peer_address": { 00:15:18.236 "trtype": "TCP", 00:15:18.236 "adrfam": "IPv4", 00:15:18.236 "traddr": "10.0.0.1", 00:15:18.236 "trsvcid": "56670" 00:15:18.236 }, 00:15:18.236 "auth": { 00:15:18.236 "state": "completed", 00:15:18.236 "digest": "sha256", 00:15:18.236 "dhgroup": "ffdhe6144" 00:15:18.236 } 00:15:18.236 } 00:15:18.236 ]' 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.236 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.495 19:44:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.872 19:44:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:19.872 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.441 00:15:20.441 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.441 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.441 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.699 { 00:15:20.699 "cntlid": 39, 00:15:20.699 "qid": 0, 00:15:20.699 "state": "enabled", 00:15:20.699 "thread": "nvmf_tgt_poll_group_000", 00:15:20.699 "listen_address": { 00:15:20.699 "trtype": "TCP", 00:15:20.699 
"adrfam": "IPv4", 00:15:20.699 "traddr": "10.0.0.2", 00:15:20.699 "trsvcid": "4420" 00:15:20.699 }, 00:15:20.699 "peer_address": { 00:15:20.699 "trtype": "TCP", 00:15:20.699 "adrfam": "IPv4", 00:15:20.699 "traddr": "10.0.0.1", 00:15:20.699 "trsvcid": "56692" 00:15:20.699 }, 00:15:20.699 "auth": { 00:15:20.699 "state": "completed", 00:15:20.699 "digest": "sha256", 00:15:20.699 "dhgroup": "ffdhe6144" 00:15:20.699 } 00:15:20.699 } 00:15:20.699 ]' 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:20.699 19:44:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.699 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:20.699 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.699 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.699 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.699 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.959 19:44:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:21.924 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.924 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.924 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:21.924 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.182 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:22.182 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:22.182 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.182 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.182 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:22.439 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:22.439 19:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.439 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:22.439 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:22.439 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.440 19:44:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.373 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:23.373 { 00:15:23.373 "cntlid": 41, 00:15:23.373 "qid": 0, 00:15:23.373 "state": "enabled", 00:15:23.373 "thread": "nvmf_tgt_poll_group_000", 00:15:23.373 "listen_address": { 00:15:23.373 "trtype": "TCP", 00:15:23.373 "adrfam": "IPv4", 00:15:23.373 "traddr": "10.0.0.2", 00:15:23.373 "trsvcid": "4420" 00:15:23.373 }, 00:15:23.373 "peer_address": { 00:15:23.373 "trtype": "TCP", 00:15:23.373 "adrfam": "IPv4", 00:15:23.373 "traddr": "10.0.0.1", 00:15:23.373 "trsvcid": "35638" 00:15:23.373 
}, 00:15:23.373 "auth": { 00:15:23.373 "state": "completed", 00:15:23.373 "digest": "sha256", 00:15:23.373 "dhgroup": "ffdhe8192" 00:15:23.373 } 00:15:23.373 } 00:15:23.373 ]' 00:15:23.373 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.632 19:44:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.890 19:44:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:24.827 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:25.085 19:44:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.021 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.021 { 00:15:26.021 "cntlid": 43, 00:15:26.021 "qid": 0, 00:15:26.021 "state": "enabled", 00:15:26.021 "thread": "nvmf_tgt_poll_group_000", 00:15:26.021 "listen_address": { 00:15:26.021 "trtype": "TCP", 00:15:26.021 "adrfam": "IPv4", 00:15:26.021 "traddr": "10.0.0.2", 00:15:26.021 "trsvcid": "4420" 00:15:26.021 }, 00:15:26.021 "peer_address": { 00:15:26.021 "trtype": "TCP", 00:15:26.021 "adrfam": "IPv4", 00:15:26.021 "traddr": "10.0.0.1", 00:15:26.021 "trsvcid": "35674" 00:15:26.021 }, 00:15:26.021 "auth": { 00:15:26.021 "state": "completed", 00:15:26.021 "digest": "sha256", 00:15:26.021 "dhgroup": "ffdhe8192" 00:15:26.021 } 00:15:26.021 } 00:15:26.021 ]' 00:15:26.021 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
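The iterations traced above repeat one fixed RPC sequence per (digest, dhgroup, key) combination. Below is a condensed sketch of a single iteration, assembled only from commands, paths, addresses and NQNs that appear verbatim in this log; the DHHC-1 secrets are elided rather than reproduced, and the assumption that rpc_cmd reaches the target application's default RPC socket (the target socket is never shown in this excerpt) is flagged in the comments. This is an illustration of the pattern, not the literal target/auth.sh source.

# One DH-HMAC-CHAP iteration as exercised above (sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
subnqn=nqn.2024-03.io.spdk:cnode0
key='DHHC-1:00:...'    # elided; full values appear in the log above
ckey='DHHC-1:03:...'   # elided
hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app, as at auth.sh@31
rpc_cmd() { "$rpc" "$@"; }   # ASSUMPTION: target app on the default RPC socket
# 1. pin the host to the one digest/dhgroup under test (auth.sh@94)
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# 2. register the host on the subsystem with the key pair under test (auth.sh@39)
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
# 3. authenticate from the host bdev layer and confirm the controller exists (auth.sh@40,44)
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
# 4. tear down, then re-authenticate via the kernel initiator (auth.sh@49,52,55)
hostrpc bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
nvme disconnect -n "$subnqn"
# 5. deregister before the next combination (auth.sh@56)
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"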
00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.279 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.537 19:44:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.475 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:27.475 19:44:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:27.733 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.670 00:15:28.670 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.670 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.670 19:44:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.927 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.927 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.928 { 00:15:28.928 "cntlid": 45, 00:15:28.928 "qid": 0, 00:15:28.928 "state": "enabled", 00:15:28.928 "thread": "nvmf_tgt_poll_group_000", 00:15:28.928 "listen_address": { 00:15:28.928 "trtype": "TCP", 00:15:28.928 "adrfam": "IPv4", 00:15:28.928 "traddr": "10.0.0.2", 00:15:28.928 "trsvcid": "4420" 00:15:28.928 }, 00:15:28.928 "peer_address": { 00:15:28.928 "trtype": "TCP", 00:15:28.928 "adrfam": "IPv4", 00:15:28.928 "traddr": "10.0.0.1", 00:15:28.928 "trsvcid": "35708" 00:15:28.928 }, 00:15:28.928 "auth": { 00:15:28.928 "state": "completed", 00:15:28.928 "digest": "sha256", 00:15:28.928 "dhgroup": "ffdhe8192" 00:15:28.928 } 00:15:28.928 } 00:15:28.928 ]' 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == 
\f\f\d\h\e\8\1\9\2 ]] 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.928 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.187 19:44:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:30.565 19:44:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:31.503 00:15:31.503 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.503 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.503 19:44:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.761 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.761 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.762 { 00:15:31.762 "cntlid": 47, 00:15:31.762 "qid": 0, 00:15:31.762 "state": "enabled", 00:15:31.762 "thread": "nvmf_tgt_poll_group_000", 00:15:31.762 "listen_address": { 00:15:31.762 "trtype": "TCP", 00:15:31.762 "adrfam": "IPv4", 00:15:31.762 "traddr": "10.0.0.2", 00:15:31.762 "trsvcid": "4420" 00:15:31.762 }, 00:15:31.762 "peer_address": { 00:15:31.762 "trtype": "TCP", 00:15:31.762 "adrfam": "IPv4", 00:15:31.762 "traddr": "10.0.0.1", 00:15:31.762 "trsvcid": "35732" 00:15:31.762 }, 00:15:31.762 "auth": { 00:15:31.762 "state": "completed", 00:15:31.762 "digest": "sha256", 00:15:31.762 "dhgroup": "ffdhe8192" 00:15:31.762 } 00:15:31.762 } 00:15:31.762 ]' 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.762 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.762 19:44:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.329 19:44:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.264 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 
-- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.521 19:44:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.778 00:15:33.778 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.778 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.778 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.036 { 00:15:34.036 "cntlid": 49, 00:15:34.036 "qid": 0, 00:15:34.036 "state": "enabled", 00:15:34.036 "thread": "nvmf_tgt_poll_group_000", 00:15:34.036 "listen_address": { 00:15:34.036 "trtype": "TCP", 00:15:34.036 "adrfam": "IPv4", 00:15:34.036 "traddr": "10.0.0.2", 00:15:34.036 "trsvcid": "4420" 00:15:34.036 }, 00:15:34.036 "peer_address": { 00:15:34.036 "trtype": "TCP", 00:15:34.036 "adrfam": "IPv4", 00:15:34.036 "traddr": "10.0.0.1", 00:15:34.036 "trsvcid": "54218" 00:15:34.036 }, 00:15:34.036 "auth": { 00:15:34.036 "state": "completed", 00:15:34.036 "digest": "sha384", 00:15:34.036 "dhgroup": "null" 00:15:34.036 } 00:15:34.036 } 00:15:34.036 ]' 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.036 19:44:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.294 19:44:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.232 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.490 19:44:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.058 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.058 { 00:15:36.058 "cntlid": 51, 00:15:36.058 "qid": 0, 00:15:36.058 "state": "enabled", 00:15:36.058 "thread": "nvmf_tgt_poll_group_000", 00:15:36.058 "listen_address": { 00:15:36.058 "trtype": "TCP", 00:15:36.058 "adrfam": "IPv4", 00:15:36.058 "traddr": "10.0.0.2", 00:15:36.058 "trsvcid": "4420" 00:15:36.058 }, 00:15:36.058 "peer_address": { 00:15:36.058 "trtype": "TCP", 00:15:36.058 "adrfam": "IPv4", 00:15:36.058 "traddr": "10.0.0.1", 00:15:36.058 "trsvcid": "54238" 00:15:36.058 }, 00:15:36.058 "auth": { 00:15:36.058 "state": "completed", 00:15:36.058 "digest": "sha384", 00:15:36.058 "dhgroup": "null" 00:15:36.058 } 00:15:36.058 } 00:15:36.058 ]' 00:15:36.058 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.315 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.573 19:44:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.510 19:44:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.768 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:37.769 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.769 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.026 00:15:38.026 19:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.026 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.026 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.284 { 00:15:38.284 "cntlid": 53, 00:15:38.284 "qid": 0, 00:15:38.284 "state": "enabled", 00:15:38.284 "thread": "nvmf_tgt_poll_group_000", 00:15:38.284 "listen_address": { 00:15:38.284 "trtype": "TCP", 00:15:38.284 "adrfam": "IPv4", 00:15:38.284 "traddr": "10.0.0.2", 00:15:38.284 "trsvcid": "4420" 00:15:38.284 }, 00:15:38.284 "peer_address": { 00:15:38.284 "trtype": "TCP", 00:15:38.284 "adrfam": "IPv4", 00:15:38.284 "traddr": "10.0.0.1", 00:15:38.284 "trsvcid": "54268" 00:15:38.284 }, 00:15:38.284 "auth": { 00:15:38.284 "state": "completed", 00:15:38.284 "digest": "sha384", 00:15:38.284 "dhgroup": "null" 00:15:38.284 } 00:15:38.284 } 00:15:38.284 ]' 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.284 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.541 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:38.541 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.541 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.541 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.541 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.820 19:44:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.757 
19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:39.757 19:44:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.015 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.272 00:15:40.272 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.272 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.272 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.530 19:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.530 { 00:15:40.530 "cntlid": 55, 00:15:40.530 "qid": 0, 00:15:40.530 "state": "enabled", 00:15:40.530 "thread": "nvmf_tgt_poll_group_000", 00:15:40.530 "listen_address": { 00:15:40.530 "trtype": "TCP", 00:15:40.530 "adrfam": "IPv4", 00:15:40.530 "traddr": "10.0.0.2", 00:15:40.530 "trsvcid": "4420" 00:15:40.530 }, 00:15:40.530 "peer_address": { 00:15:40.530 "trtype": "TCP", 00:15:40.530 "adrfam": "IPv4", 00:15:40.530 "traddr": "10.0.0.1", 00:15:40.530 "trsvcid": "54284" 00:15:40.530 }, 00:15:40.530 "auth": { 00:15:40.530 "state": "completed", 00:15:40.530 "digest": "sha384", 00:15:40.530 "dhgroup": "null" 00:15:40.530 } 00:15:40.530 } 00:15:40.530 ]' 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.530 19:44:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.788 19:44:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.720 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.978 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.544 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.544 { 00:15:42.544 "cntlid": 57, 00:15:42.544 "qid": 0, 00:15:42.544 "state": "enabled", 00:15:42.544 "thread": "nvmf_tgt_poll_group_000", 00:15:42.544 "listen_address": { 00:15:42.544 "trtype": "TCP", 00:15:42.544 "adrfam": "IPv4", 00:15:42.544 "traddr": "10.0.0.2", 00:15:42.544 "trsvcid": "4420" 00:15:42.544 }, 00:15:42.544 "peer_address": { 00:15:42.544 "trtype": "TCP", 00:15:42.544 "adrfam": "IPv4", 00:15:42.544 "traddr": "10.0.0.1", 00:15:42.544 "trsvcid": "45906" 00:15:42.544 }, 00:15:42.544 "auth": { 00:15:42.544 "state": "completed", 00:15:42.544 "digest": "sha384", 00:15:42.544 "dhgroup": "ffdhe2048" 00:15:42.544 } 00:15:42.544 } 00:15:42.544 ]' 00:15:42.544 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.802 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.802 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.802 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:42.802 19:44:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.802 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.802 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.802 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.061 19:45:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.996 19:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:43.996 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:44.254 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.255 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:44.255 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.255 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.512 00:15:44.513 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.513 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.513 19:45:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:44.771 19:45:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.771 { 00:15:44.771 "cntlid": 59, 00:15:44.771 "qid": 0, 00:15:44.771 "state": "enabled", 00:15:44.771 "thread": "nvmf_tgt_poll_group_000", 00:15:44.771 "listen_address": { 00:15:44.771 "trtype": "TCP", 00:15:44.771 "adrfam": "IPv4", 00:15:44.771 "traddr": "10.0.0.2", 00:15:44.771 "trsvcid": "4420" 00:15:44.771 }, 00:15:44.771 "peer_address": { 00:15:44.771 "trtype": "TCP", 00:15:44.771 "adrfam": "IPv4", 00:15:44.771 "traddr": "10.0.0.1", 00:15:44.771 "trsvcid": "45934" 00:15:44.771 }, 00:15:44.771 "auth": { 00:15:44.771 "state": "completed", 00:15:44.771 "digest": "sha384", 00:15:44.771 "dhgroup": "ffdhe2048" 00:15:44.771 } 00:15:44.771 } 00:15:44.771 ]' 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.771 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.029 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:45.029 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.029 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.029 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.029 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.287 19:45:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.220 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.478 19:45:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.736 00:15:46.736 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.736 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.736 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.994 { 00:15:46.994 "cntlid": 61, 00:15:46.994 "qid": 0, 00:15:46.994 "state": "enabled", 00:15:46.994 "thread": "nvmf_tgt_poll_group_000", 00:15:46.994 "listen_address": { 00:15:46.994 "trtype": "TCP", 00:15:46.994 "adrfam": "IPv4", 00:15:46.994 "traddr": 
"10.0.0.2", 00:15:46.994 "trsvcid": "4420" 00:15:46.994 }, 00:15:46.994 "peer_address": { 00:15:46.994 "trtype": "TCP", 00:15:46.994 "adrfam": "IPv4", 00:15:46.994 "traddr": "10.0.0.1", 00:15:46.994 "trsvcid": "45948" 00:15:46.994 }, 00:15:46.994 "auth": { 00:15:46.994 "state": "completed", 00:15:46.994 "digest": "sha384", 00:15:46.994 "dhgroup": "ffdhe2048" 00:15:46.994 } 00:15:46.994 } 00:15:46.994 ]' 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.994 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.252 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:47.252 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.252 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.252 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.252 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.510 19:45:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.447 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.706 19:45:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.706 19:45:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.963 00:15:48.963 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.964 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.964 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.221 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.221 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.221 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:49.221 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.221 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:49.221 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.222 { 00:15:49.222 "cntlid": 63, 00:15:49.222 "qid": 0, 00:15:49.222 "state": "enabled", 00:15:49.222 "thread": "nvmf_tgt_poll_group_000", 00:15:49.222 "listen_address": { 00:15:49.222 "trtype": "TCP", 00:15:49.222 "adrfam": "IPv4", 00:15:49.222 "traddr": "10.0.0.2", 00:15:49.222 "trsvcid": "4420" 00:15:49.222 }, 00:15:49.222 "peer_address": { 00:15:49.222 "trtype": "TCP", 00:15:49.222 "adrfam": "IPv4", 00:15:49.222 "traddr": "10.0.0.1", 00:15:49.222 "trsvcid": "45968" 00:15:49.222 }, 00:15:49.222 "auth": { 00:15:49.222 "state": "completed", 00:15:49.222 "digest": "sha384", 00:15:49.222 "dhgroup": "ffdhe2048" 00:15:49.222 } 00:15:49.222 } 00:15:49.222 ]' 00:15:49.222 19:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.222 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.222 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.480 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:49.480 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.480 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.480 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.481 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.740 19:45:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.676 19:45:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.934 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.192 00:15:51.192 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.192 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.192 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.451 { 00:15:51.451 "cntlid": 65, 00:15:51.451 "qid": 0, 00:15:51.451 "state": "enabled", 00:15:51.451 "thread": "nvmf_tgt_poll_group_000", 00:15:51.451 "listen_address": { 00:15:51.451 "trtype": "TCP", 00:15:51.451 "adrfam": "IPv4", 00:15:51.451 "traddr": "10.0.0.2", 00:15:51.451 "trsvcid": "4420" 00:15:51.451 }, 00:15:51.451 "peer_address": { 00:15:51.451 "trtype": "TCP", 00:15:51.451 "adrfam": "IPv4", 00:15:51.451 "traddr": "10.0.0.1", 00:15:51.451 "trsvcid": "45996" 00:15:51.451 }, 00:15:51.451 "auth": { 00:15:51.451 "state": "completed", 00:15:51.451 "digest": "sha384", 00:15:51.451 "dhgroup": "ffdhe3072" 00:15:51.451 } 00:15:51.451 } 00:15:51.451 ]' 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:51.451 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.709 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.709 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.709 19:45:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.969 19:45:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:52.908 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key1 --dhchap-ctrlr-key ckey1 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.166 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:53.424 00:15:53.424 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.424 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.424 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.681 { 00:15:53.681 "cntlid": 67, 00:15:53.681 "qid": 0, 00:15:53.681 "state": "enabled", 00:15:53.681 "thread": "nvmf_tgt_poll_group_000", 00:15:53.681 "listen_address": { 00:15:53.681 "trtype": "TCP", 00:15:53.681 "adrfam": "IPv4", 00:15:53.681 "traddr": "10.0.0.2", 00:15:53.681 "trsvcid": "4420" 00:15:53.681 }, 00:15:53.681 "peer_address": { 00:15:53.681 "trtype": "TCP", 00:15:53.681 "adrfam": "IPv4", 00:15:53.681 "traddr": "10.0.0.1", 00:15:53.681 "trsvcid": "56974" 00:15:53.681 }, 00:15:53.681 "auth": { 00:15:53.681 "state": "completed", 00:15:53.681 "digest": "sha384", 00:15:53.681 "dhgroup": "ffdhe3072" 00:15:53.681 } 00:15:53.681 } 00:15:53.681 ]' 00:15:53.681 19:45:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.681 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.681 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.681 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:53.681 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.938 
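#
# Each round is then verified by dumping the live queue pair from the target
# and asserting that the negotiated parameters match the configuration; these
# are exactly the jq probes traced above. Condensed sketch, assuming the qpair
# JSON shape shown in this log (rpc_cmd is the test's wrapper around the
# target-side rpc.py):
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]
#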
19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.938 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.938 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.197 19:45:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.155 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.419 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:55.677 00:15:55.677 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.677 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.677 19:45:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.935 { 00:15:55.935 "cntlid": 69, 00:15:55.935 "qid": 0, 00:15:55.935 "state": "enabled", 00:15:55.935 "thread": "nvmf_tgt_poll_group_000", 00:15:55.935 "listen_address": { 00:15:55.935 "trtype": "TCP", 00:15:55.935 "adrfam": "IPv4", 00:15:55.935 "traddr": "10.0.0.2", 00:15:55.935 "trsvcid": "4420" 00:15:55.935 }, 00:15:55.935 "peer_address": { 00:15:55.935 "trtype": "TCP", 00:15:55.935 "adrfam": "IPv4", 00:15:55.935 "traddr": "10.0.0.1", 00:15:55.935 "trsvcid": "57002" 00:15:55.935 }, 00:15:55.935 "auth": { 00:15:55.935 "state": "completed", 00:15:55.935 "digest": "sha384", 00:15:55.935 "dhgroup": "ffdhe3072" 00:15:55.935 } 00:15:55.935 } 00:15:55.935 ]' 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.935 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.208 19:45:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:57.148 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:57.408 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.667 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:57.667 19:45:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.667 19:45:14 
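#
# Besides the SPDK-host leg, every key is also exercised through the Linux
# kernel initiator: nvme-cli takes the secrets directly in DHHC-1 wire format
# rather than by keyring name, as in the nvme connect call logged above.
# Sketch with placeholder secrets (the real values appear in the trace):
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:02:<base64 host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<base64 controller secret>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
#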
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:57.927 00:15:57.927 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.927 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.927 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.185 { 00:15:58.185 "cntlid": 71, 00:15:58.185 "qid": 0, 00:15:58.185 "state": "enabled", 00:15:58.185 "thread": "nvmf_tgt_poll_group_000", 00:15:58.185 "listen_address": { 00:15:58.185 "trtype": "TCP", 00:15:58.185 "adrfam": "IPv4", 00:15:58.185 "traddr": "10.0.0.2", 00:15:58.185 "trsvcid": "4420" 00:15:58.185 }, 00:15:58.185 "peer_address": { 00:15:58.185 "trtype": "TCP", 00:15:58.185 "adrfam": "IPv4", 00:15:58.185 "traddr": "10.0.0.1", 00:15:58.185 "trsvcid": "57028" 00:15:58.185 }, 00:15:58.185 "auth": { 00:15:58.185 "state": "completed", 00:15:58.185 "digest": "sha384", 00:15:58.185 "dhgroup": "ffdhe3072" 00:15:58.185 } 00:15:58.185 } 00:15:58.185 ]' 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.185 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.443 19:45:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.380 19:45:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:15:59.639 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.898 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:15:59.898 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.898 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.157 00:16:00.157 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.157 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.157 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.415 { 00:16:00.415 "cntlid": 73, 00:16:00.415 "qid": 0, 00:16:00.415 "state": "enabled", 00:16:00.415 "thread": "nvmf_tgt_poll_group_000", 00:16:00.415 "listen_address": { 00:16:00.415 "trtype": "TCP", 00:16:00.415 "adrfam": "IPv4", 00:16:00.415 "traddr": "10.0.0.2", 00:16:00.415 "trsvcid": "4420" 00:16:00.415 }, 00:16:00.415 "peer_address": { 00:16:00.415 "trtype": "TCP", 00:16:00.415 "adrfam": "IPv4", 00:16:00.415 "traddr": "10.0.0.1", 00:16:00.415 "trsvcid": "57048" 00:16:00.415 }, 00:16:00.415 "auth": { 00:16:00.415 "state": "completed", 00:16:00.415 "digest": "sha384", 00:16:00.415 "dhgroup": "ffdhe4096" 00:16:00.415 } 00:16:00.415 } 00:16:00.415 ]' 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.415 19:45:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.673 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:16:01.607 19:45:18 
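#
# The trace markers target/auth.sh@92-96 reveal the loop driving this whole
# section: every FFDHE group is crossed with every configured key index.
# Condensed sketch of that driver, assuming keys[]/ckeys[] were populated
# earlier in the script; only the groups visible in this part of the log are
# listed, the script's own list may be longer:
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144; do
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
            --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done
#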
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.607 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.607 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:01.607 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.867 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:01.867 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.867 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.867 19:45:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:01.867 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.126 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:02.127 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.127 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.385 00:16:02.385 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.385 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.385 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.643 { 00:16:02.643 "cntlid": 75, 00:16:02.643 "qid": 0, 00:16:02.643 "state": "enabled", 00:16:02.643 "thread": "nvmf_tgt_poll_group_000", 00:16:02.643 "listen_address": { 00:16:02.643 "trtype": "TCP", 00:16:02.643 "adrfam": "IPv4", 00:16:02.643 "traddr": "10.0.0.2", 00:16:02.643 "trsvcid": "4420" 00:16:02.643 }, 00:16:02.643 "peer_address": { 00:16:02.643 "trtype": "TCP", 00:16:02.643 "adrfam": "IPv4", 00:16:02.643 "traddr": "10.0.0.1", 00:16:02.643 "trsvcid": "57382" 00:16:02.643 }, 00:16:02.643 "auth": { 00:16:02.643 "state": "completed", 00:16:02.643 "digest": "sha384", 00:16:02.643 "dhgroup": "ffdhe4096" 00:16:02.643 } 00:16:02.643 } 00:16:02.643 ]' 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.643 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.644 19:45:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.901 19:45:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.839 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:03.839 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:04.097 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.355 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:04.355 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.355 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.613 00:16:04.613 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.613 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.613 19:45:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.870 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:16:04.870 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.870 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:04.870 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.870 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:04.870 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.870 { 00:16:04.870 "cntlid": 77, 00:16:04.870 "qid": 0, 00:16:04.870 "state": "enabled", 00:16:04.871 "thread": "nvmf_tgt_poll_group_000", 00:16:04.871 "listen_address": { 00:16:04.871 "trtype": "TCP", 00:16:04.871 "adrfam": "IPv4", 00:16:04.871 "traddr": "10.0.0.2", 00:16:04.871 "trsvcid": "4420" 00:16:04.871 }, 00:16:04.871 "peer_address": { 00:16:04.871 "trtype": "TCP", 00:16:04.871 "adrfam": "IPv4", 00:16:04.871 "traddr": "10.0.0.1", 00:16:04.871 "trsvcid": "57406" 00:16:04.871 }, 00:16:04.871 "auth": { 00:16:04.871 "state": "completed", 00:16:04.871 "digest": "sha384", 00:16:04.871 "dhgroup": "ffdhe4096" 00:16:04.871 } 00:16:04.871 } 00:16:04.871 ]' 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.871 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.130 19:45:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.064 19:45:23 
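#
# Between rounds the controller is torn down on both sides and the host ACL
# entry removed, so the next authentication attempt starts from a clean
# subsystem state. Sketch of the teardown, mirroring target/auth.sh@49, @55
# and @56 above:
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
hostrpc bdev_nvme_detach_controller nvme0
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
#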
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.064 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:06.321 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.579 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:06.579 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.579 19:45:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:06.837 00:16:06.837 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.837 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.837 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.095 { 00:16:07.095 "cntlid": 79, 00:16:07.095 "qid": 0, 00:16:07.095 "state": "enabled", 00:16:07.095 "thread": "nvmf_tgt_poll_group_000", 00:16:07.095 "listen_address": { 00:16:07.095 "trtype": "TCP", 00:16:07.095 "adrfam": "IPv4", 00:16:07.095 "traddr": "10.0.0.2", 00:16:07.095 "trsvcid": "4420" 00:16:07.095 }, 00:16:07.095 "peer_address": { 00:16:07.095 "trtype": "TCP", 00:16:07.095 "adrfam": "IPv4", 00:16:07.095 "traddr": "10.0.0.1", 00:16:07.095 "trsvcid": "57428" 00:16:07.095 }, 00:16:07.095 "auth": { 00:16:07.095 "state": "completed", 00:16:07.095 "digest": "sha384", 00:16:07.095 "dhgroup": "ffdhe4096" 00:16:07.095 } 00:16:07.095 } 00:16:07.095 ]' 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.095 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.659 19:45:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:08.592 19:45:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:08.849 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.416 00:16:09.416 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.416 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.416 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.673 { 00:16:09.673 "cntlid": 81, 00:16:09.673 
"qid": 0, 00:16:09.673 "state": "enabled", 00:16:09.673 "thread": "nvmf_tgt_poll_group_000", 00:16:09.673 "listen_address": { 00:16:09.673 "trtype": "TCP", 00:16:09.673 "adrfam": "IPv4", 00:16:09.673 "traddr": "10.0.0.2", 00:16:09.673 "trsvcid": "4420" 00:16:09.673 }, 00:16:09.673 "peer_address": { 00:16:09.673 "trtype": "TCP", 00:16:09.673 "adrfam": "IPv4", 00:16:09.673 "traddr": "10.0.0.1", 00:16:09.673 "trsvcid": "57446" 00:16:09.673 }, 00:16:09.673 "auth": { 00:16:09.673 "state": "completed", 00:16:09.673 "digest": "sha384", 00:16:09.673 "dhgroup": "ffdhe6144" 00:16:09.673 } 00:16:09.673 } 00:16:09.673 ]' 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.673 19:45:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.931 19:45:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:10.863 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:11.120 19:45:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.120 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.705 00:16:11.705 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.705 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.705 19:45:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.961 { 00:16:11.961 "cntlid": 83, 00:16:11.961 "qid": 0, 00:16:11.961 "state": "enabled", 00:16:11.961 "thread": "nvmf_tgt_poll_group_000", 00:16:11.961 "listen_address": { 00:16:11.961 "trtype": "TCP", 00:16:11.961 "adrfam": "IPv4", 00:16:11.961 "traddr": "10.0.0.2", 00:16:11.961 "trsvcid": "4420" 00:16:11.961 }, 00:16:11.961 "peer_address": { 
00:16:11.961 "trtype": "TCP", 00:16:11.961 "adrfam": "IPv4", 00:16:11.961 "traddr": "10.0.0.1", 00:16:11.961 "trsvcid": "57478" 00:16:11.961 }, 00:16:11.961 "auth": { 00:16:11.961 "state": "completed", 00:16:11.961 "digest": "sha384", 00:16:11.961 "dhgroup": "ffdhe6144" 00:16:11.961 } 00:16:11.961 } 00:16:11.961 ]' 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.961 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.218 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.218 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.218 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.475 19:45:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.408 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.665 19:45:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.230 00:16:14.230 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.230 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.230 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.488 { 00:16:14.488 "cntlid": 85, 00:16:14.488 "qid": 0, 00:16:14.488 "state": "enabled", 00:16:14.488 "thread": "nvmf_tgt_poll_group_000", 00:16:14.488 "listen_address": { 00:16:14.488 "trtype": "TCP", 00:16:14.488 "adrfam": "IPv4", 00:16:14.488 "traddr": "10.0.0.2", 00:16:14.488 "trsvcid": "4420" 00:16:14.488 }, 00:16:14.488 "peer_address": { 00:16:14.488 "trtype": "TCP", 00:16:14.488 "adrfam": "IPv4", 00:16:14.488 "traddr": "10.0.0.1", 00:16:14.488 "trsvcid": "51244" 00:16:14.488 }, 00:16:14.488 "auth": { 00:16:14.488 "state": "completed", 00:16:14.488 "digest": "sha384", 00:16:14.488 "dhgroup": "ffdhe6144" 00:16:14.488 } 00:16:14.488 } 00:16:14.488 ]' 00:16:14.488 19:45:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.488 19:45:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.746 19:45:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:16:16.119 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.119 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.119 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:16.119 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.120 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.685 00:16:16.685 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.685 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.685 19:45:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.942 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.942 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.942 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:16.942 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.942 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:16.942 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.942 { 00:16:16.942 "cntlid": 87, 00:16:16.942 "qid": 0, 00:16:16.942 "state": "enabled", 00:16:16.942 "thread": "nvmf_tgt_poll_group_000", 00:16:16.942 "listen_address": { 00:16:16.942 "trtype": "TCP", 00:16:16.942 "adrfam": "IPv4", 00:16:16.942 "traddr": "10.0.0.2", 00:16:16.942 "trsvcid": "4420" 00:16:16.942 }, 00:16:16.942 "peer_address": { 00:16:16.942 "trtype": "TCP", 00:16:16.942 "adrfam": "IPv4", 00:16:16.942 "traddr": "10.0.0.1", 00:16:16.942 "trsvcid": "51270" 00:16:16.942 }, 00:16:16.942 "auth": { 00:16:16.942 "state": "completed", 00:16:16.942 "digest": "sha384", 00:16:16.942 "dhgroup": "ffdhe6144" 00:16:16.942 } 00:16:16.942 } 00:16:16.942 ]' 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.943 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.200 19:45:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:18.573 19:45:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.507 00:16:19.507 19:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.507 19:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.507 19:45:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.764 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.764 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.764 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:19.764 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.764 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:19.764 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.764 { 00:16:19.764 "cntlid": 89, 00:16:19.764 "qid": 0, 00:16:19.764 "state": "enabled", 00:16:19.764 "thread": "nvmf_tgt_poll_group_000", 00:16:19.764 "listen_address": { 00:16:19.764 "trtype": "TCP", 00:16:19.764 "adrfam": "IPv4", 00:16:19.764 "traddr": "10.0.0.2", 00:16:19.765 "trsvcid": "4420" 00:16:19.765 }, 00:16:19.765 "peer_address": { 00:16:19.765 "trtype": "TCP", 00:16:19.765 "adrfam": "IPv4", 00:16:19.765 "traddr": "10.0.0.1", 00:16:19.765 "trsvcid": "51304" 00:16:19.765 }, 00:16:19.765 "auth": { 00:16:19.765 "state": "completed", 00:16:19.765 "digest": "sha384", 00:16:19.765 "dhgroup": "ffdhe8192" 00:16:19.765 } 00:16:19.765 } 00:16:19.765 ]' 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.765 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.021 19:45:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:20.954 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:21.211 19:45:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.211 19:45:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:22.143 00:16:22.143 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.143 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.144 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.401 { 00:16:22.401 "cntlid": 91, 00:16:22.401 "qid": 0, 00:16:22.401 "state": "enabled", 00:16:22.401 "thread": "nvmf_tgt_poll_group_000", 00:16:22.401 "listen_address": { 00:16:22.401 "trtype": "TCP", 00:16:22.401 "adrfam": "IPv4", 00:16:22.401 "traddr": "10.0.0.2", 00:16:22.401 "trsvcid": "4420" 00:16:22.401 }, 00:16:22.401 "peer_address": { 00:16:22.401 "trtype": "TCP", 00:16:22.401 "adrfam": "IPv4", 00:16:22.401 "traddr": "10.0.0.1", 00:16:22.401 "trsvcid": "51320" 00:16:22.401 }, 00:16:22.401 "auth": { 00:16:22.401 "state": "completed", 00:16:22.401 "digest": "sha384", 00:16:22.401 "dhgroup": "ffdhe8192" 00:16:22.401 } 00:16:22.401 } 00:16:22.401 ]' 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:22.401 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.659 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.659 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.659 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.659 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.659 19:45:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.916 19:45:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:23.848 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
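
Each nvme connect in this log authenticates in-band with DH-HMAC-CHAP secrets in the standard DHHC-1 representation: DHHC-1:<t>:<base64 payload>:, where the payload is the secret concatenated with a CRC-32 and <t> records how the secret was transformed (00 = untransformed, 01/02/03 = SHA-256/384/512). That is why the secret used as key3 throughout this run carries the 03 prefix while key0 carries 00. A hedged example of producing such a secret with nvme-cli (gen-dhchap-key availability depends on the nvme-cli build; the NQN is a placeholder):

    # Generate a 48-byte DH-HMAC-CHAP secret, transformed with SHA-384 (-m 2)
    # and bound to the host NQN; prints a DHHC-1:02:...: string suitable for
    # --dhchap-secret / --dhchap-ctrl-secret.
    nvme gen-dhchap-key --key-length=48 --hmac=2 --nqn "$hostnqn"

Passing --dhchap-ctrl-secret in addition to --dhchap-secret, as most iterations here do, requests bidirectional authentication: the host also challenges the controller with the second key.
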
00:16:24.106 19:45:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.038 00:16:25.038 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.038 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.038 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.295 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.295 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.295 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:25.295 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.295 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:25.295 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.295 { 00:16:25.295 "cntlid": 93, 00:16:25.295 "qid": 0, 00:16:25.295 "state": "enabled", 00:16:25.295 "thread": "nvmf_tgt_poll_group_000", 00:16:25.295 "listen_address": { 00:16:25.295 "trtype": "TCP", 00:16:25.295 "adrfam": "IPv4", 00:16:25.295 "traddr": "10.0.0.2", 00:16:25.295 "trsvcid": "4420" 00:16:25.295 }, 00:16:25.295 "peer_address": { 00:16:25.295 "trtype": "TCP", 00:16:25.295 "adrfam": "IPv4", 00:16:25.295 "traddr": "10.0.0.1", 00:16:25.295 "trsvcid": "46504" 00:16:25.295 }, 00:16:25.295 "auth": { 00:16:25.295 "state": "completed", 00:16:25.295 "digest": "sha384", 00:16:25.295 "dhgroup": "ffdhe8192" 00:16:25.296 } 00:16:25.296 } 00:16:25.296 ]' 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.296 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.553 19:45:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.487 19:45:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:26.778 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.712 00:16:27.712 19:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.712 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.712 19:45:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.970 { 00:16:27.970 "cntlid": 95, 00:16:27.970 "qid": 0, 00:16:27.970 "state": "enabled", 00:16:27.970 "thread": "nvmf_tgt_poll_group_000", 00:16:27.970 "listen_address": { 00:16:27.970 "trtype": "TCP", 00:16:27.970 "adrfam": "IPv4", 00:16:27.970 "traddr": "10.0.0.2", 00:16:27.970 "trsvcid": "4420" 00:16:27.970 }, 00:16:27.970 "peer_address": { 00:16:27.970 "trtype": "TCP", 00:16:27.970 "adrfam": "IPv4", 00:16:27.970 "traddr": "10.0.0.1", 00:16:27.970 "trsvcid": "46536" 00:16:27.970 }, 00:16:27.970 "auth": { 00:16:27.970 "state": "completed", 00:16:27.970 "digest": "sha384", 00:16:27.970 "dhgroup": "ffdhe8192" 00:16:27.970 } 00:16:27.970 } 00:16:27.970 ]' 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:27.970 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.228 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.228 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.228 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.485 19:45:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.418 19:45:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.418 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.676 19:45:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.934 00:16:29.934 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.934 19:45:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.934 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.192 { 00:16:30.192 "cntlid": 97, 00:16:30.192 "qid": 0, 00:16:30.192 "state": "enabled", 00:16:30.192 "thread": "nvmf_tgt_poll_group_000", 00:16:30.192 "listen_address": { 00:16:30.192 "trtype": "TCP", 00:16:30.192 "adrfam": "IPv4", 00:16:30.192 "traddr": "10.0.0.2", 00:16:30.192 "trsvcid": "4420" 00:16:30.192 }, 00:16:30.192 "peer_address": { 00:16:30.192 "trtype": "TCP", 00:16:30.192 "adrfam": "IPv4", 00:16:30.192 "traddr": "10.0.0.1", 00:16:30.192 "trsvcid": "46562" 00:16:30.192 }, 00:16:30.192 "auth": { 00:16:30.192 "state": "completed", 00:16:30.192 "digest": "sha512", 00:16:30.192 "dhgroup": "null" 00:16:30.192 } 00:16:30.192 } 00:16:30.192 ]' 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.192 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.450 19:45:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:31.383 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:31.641 19:45:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.641 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:31.641 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.641 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:32.210
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:32.210 {
00:16:32.210 "cntlid": 99,
00:16:32.210 "qid": 0,
00:16:32.210 "state": "enabled",
00:16:32.210 "thread": "nvmf_tgt_poll_group_000",
00:16:32.210 "listen_address": {
00:16:32.210 "trtype": "TCP",
00:16:32.210 "adrfam": "IPv4",
00:16:32.210 "traddr": "10.0.0.2",
00:16:32.210 "trsvcid": "4420"
00:16:32.210 },
00:16:32.210 "peer_address": {
00:16:32.210 "trtype": "TCP",
00:16:32.210 "adrfam": "IPv4",
00:16:32.210 "traddr": "10.0.0.1",
00:16:32.210 "trsvcid": "46598"
00:16:32.210 },
00:16:32.210 "auth": {
00:16:32.210 "state": "completed",
00:16:32.210 "digest": "sha512",
00:16:32.210 "dhgroup": "null"
00:16:32.210 }
00:16:32.210 }
00:16:32.210 ]'
00:16:32.210 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:32.468 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.728 19:45:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==:
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:33.666 19:45:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.924 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:34.182
00:16:34.182 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:34.182 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.182 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:34.440 {
00:16:34.440 "cntlid": 101,
00:16:34.440 "qid": 0,
00:16:34.440 "state": "enabled",
00:16:34.440 "thread": "nvmf_tgt_poll_group_000",
00:16:34.440 "listen_address": {
00:16:34.440 "trtype": "TCP",
00:16:34.440 "adrfam": "IPv4",
00:16:34.440 "traddr": "10.0.0.2",
00:16:34.440 "trsvcid": "4420"
00:16:34.440 },
00:16:34.440 "peer_address": {
00:16:34.440 "trtype": "TCP",
00:16:34.440 "adrfam": "IPv4",
00:16:34.440 "traddr": "10.0.0.1",
00:16:34.440 "trsvcid": "49420"
00:16:34.440 },
00:16:34.440 "auth": {
00:16:34.440 "state": "completed",
00:16:34.440 "digest": "sha512",
00:16:34.440 "dhgroup": "null"
00:16:34.440 }
00:16:34.440 }
00:16:34.440 ]'
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:34.440 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:34.696 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:34.696 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:34.696 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.696 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.696 19:45:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.953 19:45:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska:
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:35.886 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:36.144 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:36.401
00:16:36.401 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:36.401 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:36.401 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:36.659 {
00:16:36.659 "cntlid": 103,
00:16:36.659 "qid": 0,
00:16:36.659 "state": "enabled",
00:16:36.659 "thread": "nvmf_tgt_poll_group_000",
00:16:36.659 "listen_address": {
00:16:36.659 "trtype": "TCP",
00:16:36.659 "adrfam": "IPv4",
00:16:36.659 "traddr": "10.0.0.2",
00:16:36.659 "trsvcid": "4420"
00:16:36.659 },
00:16:36.659 "peer_address": {
00:16:36.659 "trtype": "TCP",
00:16:36.659 "adrfam": "IPv4",
00:16:36.659 "traddr": "10.0.0.1",
00:16:36.659 "trsvcid": "49440"
00:16:36.659 },
00:16:36.659 "auth": {
00:16:36.659 "state": "completed",
00:16:36.659 "digest": "sha512",
00:16:36.659 "dhgroup": "null"
00:16:36.659 }
00:16:36.659 }
00:16:36.659 ]'
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:16:36.659 19:45:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:36.659 19:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.659 19:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.659 19:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.916 19:45:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=:
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:38.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.289 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.547
00:16:38.547 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:38.547 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:38.547 19:45:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:38.805 {
00:16:38.805 "cntlid": 105,
00:16:38.805 "qid": 0,
00:16:38.805 "state": "enabled",
00:16:38.805 "thread": "nvmf_tgt_poll_group_000",
00:16:38.805 "listen_address": {
00:16:38.805 "trtype": "TCP",
00:16:38.805 "adrfam": "IPv4",
00:16:38.805 "traddr": "10.0.0.2",
00:16:38.805 "trsvcid": "4420"
00:16:38.805 },
00:16:38.805 "peer_address": {
00:16:38.805 "trtype": "TCP",
00:16:38.805 "adrfam": "IPv4",
00:16:38.805 "traddr": "10.0.0.1",
00:16:38.805 "trsvcid": "49462"
00:16:38.805 },
00:16:38.805 "auth": {
00:16:38.805 "state": "completed",
00:16:38.805 "digest": "sha512",
00:16:38.805 "dhgroup": "ffdhe2048"
00:16:38.805 }
00:16:38.805 }
00:16:38.805 ]'
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:38.805 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:39.063 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:39.063 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:39.063 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:39.321 19:45:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=:
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:40.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:40.253 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.511 19:45:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:40.769
00:16:40.769 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:40.769 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:40.769 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:41.026 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:41.027 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:41.027 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:41.027 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.027 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:41.027 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:41.027 {
00:16:41.027 "cntlid": 107,
00:16:41.027 "qid": 0,
00:16:41.027 "state": "enabled",
00:16:41.027 "thread": "nvmf_tgt_poll_group_000",
00:16:41.027 "listen_address": {
00:16:41.027 "trtype": "TCP",
00:16:41.027 "adrfam": "IPv4",
00:16:41.027 "traddr": "10.0.0.2",
00:16:41.027 "trsvcid": "4420"
00:16:41.027 },
00:16:41.027 "peer_address": {
00:16:41.027 "trtype": "TCP",
00:16:41.027 "adrfam": "IPv4",
00:16:41.027 "traddr": "10.0.0.1",
00:16:41.027 "trsvcid": "49476"
00:16:41.027 },
00:16:41.027 "auth": {
00:16:41.027 "state": "completed",
00:16:41.027 "digest": "sha512",
00:16:41.027 "dhgroup": "ffdhe2048"
00:16:41.027 }
00:16:41.027 }
00:16:41.027 ]'
00:16:41.027 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:41.284 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:41.542 19:45:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==:
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:42.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:42.532 19:45:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:42.790 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:43.049
00:16:43.049 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:43.049 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:43.049 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:43.307 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:43.307 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:43.307 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:43.307 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:43.308 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:43.308 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:43.308 {
00:16:43.308 "cntlid": 109,
00:16:43.308 "qid": 0,
00:16:43.308 "state": "enabled",
00:16:43.308 "thread": "nvmf_tgt_poll_group_000",
00:16:43.308 "listen_address": {
00:16:43.308 "trtype": "TCP",
00:16:43.308 "adrfam": "IPv4",
00:16:43.308 "traddr": "10.0.0.2",
00:16:43.308 "trsvcid": "4420"
00:16:43.308 },
00:16:43.308 "peer_address": {
00:16:43.308 "trtype": "TCP",
00:16:43.308 "adrfam": "IPv4",
00:16:43.308 "traddr": "10.0.0.1",
00:16:43.308 "trsvcid": "50536"
00:16:43.308 },
00:16:43.308 "auth": {
00:16:43.308 "state": "completed",
00:16:43.308 "digest": "sha512",
00:16:43.308 "dhgroup": "ffdhe2048"
00:16:43.308 }
00:16:43.308 }
00:16:43.308 ]'
00:16:43.308 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:43.565 19:46:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:43.822 19:46:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska:
00:16:44.758 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:44.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:44.758 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:44.759 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:44.759 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.759 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:44.759 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:44.759 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:44.759 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:45.016 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:45.275
00:16:45.275 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:45.275 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:45.275 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:45.533 {
00:16:45.533 "cntlid": 111,
00:16:45.533 "qid": 0,
00:16:45.533 "state": "enabled",
00:16:45.533 "thread": "nvmf_tgt_poll_group_000",
00:16:45.533 "listen_address": {
00:16:45.533 "trtype": "TCP",
00:16:45.533 "adrfam": "IPv4",
00:16:45.533 "traddr": "10.0.0.2",
00:16:45.533 "trsvcid": "4420"
00:16:45.533 },
00:16:45.533 "peer_address": {
00:16:45.533 "trtype": "TCP",
00:16:45.533 "adrfam": "IPv4",
00:16:45.533 "traddr": "10.0.0.1",
00:16:45.533 "trsvcid": "50546"
00:16:45.533 },
00:16:45.533 "auth": {
00:16:45.533 "state": "completed",
00:16:45.533 "digest": "sha512",
00:16:45.533 "dhgroup": "ffdhe2048"
00:16:45.533 }
00:16:45.533 }
00:16:45.533 ]'
00:16:45.533 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:45.791 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:45.791 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:45.791 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:45.791 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:45.791 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:45.791 19:46:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:45.791 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:46.049 19:46:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=:
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:46.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:46.984 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:47.242 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:47.500
00:16:47.500 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:47.500 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:47.500 19:46:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:47.759 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:47.759 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:47.759 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:47.759 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:47.759 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:47.759 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:47.759 {
00:16:47.759 "cntlid": 113,
00:16:47.759 "qid": 0,
00:16:47.759 "state": "enabled",
00:16:47.759 "thread": "nvmf_tgt_poll_group_000",
00:16:47.759 "listen_address": {
00:16:47.759 "trtype": "TCP",
00:16:47.759 "adrfam": "IPv4",
00:16:47.759 "traddr": "10.0.0.2",
00:16:47.759 "trsvcid": "4420"
00:16:47.759 },
00:16:47.759 "peer_address": {
00:16:47.759 "trtype": "TCP",
00:16:47.759 "adrfam": "IPv4",
00:16:47.759 "traddr": "10.0.0.1",
00:16:47.759 "trsvcid": "50580"
00:16:47.759 },
00:16:47.759 "auth": {
00:16:47.759 "state": "completed",
00:16:47.759 "digest": "sha512",
00:16:47.759 "dhgroup": "ffdhe3072"
00:16:47.759 }
00:16:47.759 }
00:16:47.760 ]'
00:16:47.760 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:48.018 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:48.276 19:46:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=:
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:49.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:49.210 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:49.468 19:46:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:49.726
00:16:49.726 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:49.726 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:49.726 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:49.985 {
00:16:49.985 "cntlid": 115,
00:16:49.985 "qid": 0,
00:16:49.985 "state": "enabled",
00:16:49.985 "thread": "nvmf_tgt_poll_group_000",
00:16:49.985 "listen_address": {
00:16:49.985 "trtype": "TCP",
00:16:49.985 "adrfam": "IPv4",
00:16:49.985 "traddr": "10.0.0.2",
00:16:49.985 "trsvcid": "4420"
00:16:49.985 },
00:16:49.985 "peer_address": {
00:16:49.985 "trtype": "TCP",
00:16:49.985 "adrfam": "IPv4",
00:16:49.985 "traddr": "10.0.0.1",
00:16:49.985 "trsvcid": "50608"
00:16:49.985 },
00:16:49.985 "auth": {
00:16:49.985 "state": "completed",
00:16:49.985 "digest": "sha512",
00:16:49.985 "dhgroup": "ffdhe3072"
00:16:49.985 }
00:16:49.985 }
00:16:49.985 ]'
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:49.985 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:50.242 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:50.242 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:50.242 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:50.242 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:50.242 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:50.501 19:46:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==:
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:51.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:51.436 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:51.694 19:46:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:51.951
00:16:51.952 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:51.952 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:51.952 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:52.210 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:52.210 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:52.210 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:52.210 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:52.469 {
00:16:52.469 "cntlid": 117,
00:16:52.469 "qid": 0,
00:16:52.469 "state": "enabled",
00:16:52.469 "thread": "nvmf_tgt_poll_group_000",
00:16:52.469 "listen_address": {
00:16:52.469 "trtype": "TCP",
00:16:52.469 "adrfam": "IPv4",
00:16:52.469 "traddr": "10.0.0.2",
00:16:52.469 "trsvcid": "4420"
00:16:52.469 },
00:16:52.469 "peer_address": {
00:16:52.469 "trtype": "TCP",
00:16:52.469 "adrfam": "IPv4",
00:16:52.469 "traddr": "10.0.0.1",
00:16:52.469 "trsvcid": "50632"
00:16:52.469 },
00:16:52.469 "auth": {
00:16:52.469 "state": "completed",
00:16:52.469 "digest": "sha512",
00:16:52.469 "dhgroup": "ffdhe3072"
00:16:52.469 }
00:16:52.469 }
00:16:52.469 ]'
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:52.469 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:52.727 19:46:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska:
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:53.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:53.662 19:46:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:53.919 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:54.486
00:16:54.486 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:54.486 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:54.487 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_get_controllers 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.744 { 00:16:54.744 "cntlid": 119, 00:16:54.744 "qid": 0, 00:16:54.744 "state": "enabled", 00:16:54.744 "thread": "nvmf_tgt_poll_group_000", 00:16:54.744 "listen_address": { 00:16:54.744 "trtype": "TCP", 00:16:54.744 "adrfam": "IPv4", 00:16:54.744 "traddr": "10.0.0.2", 00:16:54.744 "trsvcid": "4420" 00:16:54.744 }, 00:16:54.744 "peer_address": { 00:16:54.744 "trtype": "TCP", 00:16:54.744 "adrfam": "IPv4", 00:16:54.744 "traddr": "10.0.0.1", 00:16:54.744 "trsvcid": "47046" 00:16:54.744 }, 00:16:54.744 "auth": { 00:16:54.744 "state": "completed", 00:16:54.744 "digest": "sha512", 00:16:54.744 "dhgroup": "ffdhe3072" 00:16:54.744 } 00:16:54.744 } 00:16:54.744 ]' 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.744 19:46:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.744 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:54.744 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.744 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.744 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.744 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.002 19:46:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:55.938 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.196 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.765 00:16:56.765 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.765 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.765 19:46:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
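[Note] After each attach, the script verifies the result from both sides: it reads the controller name back over the host socket, then dumps the subsystem's qpairs on the target and asserts the negotiated auth parameters with the jq filters seen in the trace. A hedged sketch of that check (variables as in the previous note; the expected dhgroup is whatever was set for the current pass):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
name=$("$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]

qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # group set for this pass
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished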
00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.027 { 00:16:57.027 "cntlid": 121, 00:16:57.027 "qid": 0, 00:16:57.027 "state": "enabled", 00:16:57.027 "thread": "nvmf_tgt_poll_group_000", 00:16:57.027 "listen_address": { 00:16:57.027 "trtype": "TCP", 00:16:57.027 "adrfam": "IPv4", 00:16:57.027 "traddr": "10.0.0.2", 00:16:57.027 "trsvcid": "4420" 00:16:57.027 }, 00:16:57.027 "peer_address": { 00:16:57.027 "trtype": "TCP", 00:16:57.027 "adrfam": "IPv4", 00:16:57.027 "traddr": "10.0.0.1", 00:16:57.027 "trsvcid": "47076" 00:16:57.027 }, 00:16:57.027 "auth": { 00:16:57.027 "state": "completed", 00:16:57.027 "digest": "sha512", 00:16:57.027 "dhgroup": "ffdhe4096" 00:16:57.027 } 00:16:57.027 } 00:16:57.027 ]' 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.027 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.285 19:46:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.221 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.479 19:46:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.046 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.046 { 00:16:59.046 "cntlid": 123, 00:16:59.046 "qid": 0, 00:16:59.046 "state": "enabled", 00:16:59.046 "thread": "nvmf_tgt_poll_group_000", 00:16:59.046 "listen_address": { 00:16:59.046 "trtype": "TCP", 00:16:59.046 "adrfam": "IPv4", 00:16:59.046 "traddr": "10.0.0.2", 00:16:59.046 "trsvcid": "4420" 00:16:59.046 }, 00:16:59.046 "peer_address": { 00:16:59.046 "trtype": "TCP", 00:16:59.046 "adrfam": "IPv4", 00:16:59.046 "traddr": "10.0.0.1", 00:16:59.046 "trsvcid": "47102" 00:16:59.046 }, 00:16:59.046 "auth": { 00:16:59.046 "state": "completed", 00:16:59.046 "digest": "sha512", 00:16:59.046 "dhgroup": "ffdhe4096" 00:16:59.046 } 00:16:59.046 } 00:16:59.046 ]' 00:16:59.046 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.304 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.304 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.304 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:59.304 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.304 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.304 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.305 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.562 19:46:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:17:00.500 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.501 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.759 19:46:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.017 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:01.276 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.535 { 00:17:01.535 "cntlid": 125, 00:17:01.535 "qid": 0, 00:17:01.535 "state": "enabled", 00:17:01.535 "thread": "nvmf_tgt_poll_group_000", 00:17:01.535 "listen_address": 
{ 00:17:01.535 "trtype": "TCP", 00:17:01.535 "adrfam": "IPv4", 00:17:01.535 "traddr": "10.0.0.2", 00:17:01.535 "trsvcid": "4420" 00:17:01.535 }, 00:17:01.535 "peer_address": { 00:17:01.535 "trtype": "TCP", 00:17:01.535 "adrfam": "IPv4", 00:17:01.535 "traddr": "10.0.0.1", 00:17:01.535 "trsvcid": "47142" 00:17:01.535 }, 00:17:01.535 "auth": { 00:17:01.535 "state": "completed", 00:17:01.535 "digest": "sha512", 00:17:01.535 "dhgroup": "ffdhe4096" 00:17:01.535 } 00:17:01.535 } 00:17:01.535 ]' 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.535 19:46:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.793 19:46:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.729 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.987 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.555 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:03.555 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:03.813 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.813 { 00:17:03.813 "cntlid": 127, 00:17:03.813 "qid": 0, 00:17:03.813 "state": "enabled", 00:17:03.813 "thread": "nvmf_tgt_poll_group_000", 00:17:03.813 "listen_address": { 00:17:03.813 "trtype": "TCP", 00:17:03.813 "adrfam": "IPv4", 00:17:03.813 "traddr": "10.0.0.2", 00:17:03.813 "trsvcid": "4420" 00:17:03.813 }, 00:17:03.813 "peer_address": { 00:17:03.813 "trtype": "TCP", 00:17:03.813 "adrfam": "IPv4", 00:17:03.813 "traddr": "10.0.0.1", 00:17:03.813 "trsvcid": "45004" 00:17:03.813 }, 00:17:03.813 "auth": { 00:17:03.813 "state": "completed", 00:17:03.813 "digest": "sha512", 00:17:03.813 "dhgroup": 
"ffdhe4096" 00:17:03.813 } 00:17:03.813 } 00:17:03.813 ]' 00:17:03.813 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.813 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.813 19:46:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.813 19:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:03.813 19:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.813 19:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.813 19:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.813 19:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.071 19:46:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.010 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.269 19:46:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.836 00:17:05.836 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.836 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.836 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.094 { 00:17:06.094 "cntlid": 129, 00:17:06.094 "qid": 0, 00:17:06.094 "state": "enabled", 00:17:06.094 "thread": "nvmf_tgt_poll_group_000", 00:17:06.094 "listen_address": { 00:17:06.094 "trtype": "TCP", 00:17:06.094 "adrfam": "IPv4", 00:17:06.094 "traddr": "10.0.0.2", 00:17:06.094 "trsvcid": "4420" 00:17:06.094 }, 00:17:06.094 "peer_address": { 00:17:06.094 "trtype": "TCP", 00:17:06.094 "adrfam": "IPv4", 00:17:06.094 "traddr": "10.0.0.1", 00:17:06.094 "trsvcid": "45032" 00:17:06.094 }, 00:17:06.094 "auth": { 00:17:06.094 "state": "completed", 00:17:06.094 "digest": "sha512", 00:17:06.094 "dhgroup": "ffdhe6144" 00:17:06.094 } 00:17:06.094 } 00:17:06.094 ]' 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 
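[Note] Each pass also exercises the kernel initiator: the nvme connect/disconnect cycles traced throughout this run pass the raw DHHC-1 secrets on the command line rather than keyring names. A hedged sketch of that leg (the secret strings below are placeholders; the real base64 values are the ones printed in the log):

HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "nqn.2014-08.org.nvmexpress:uuid:${HOSTID}" --hostid "$HOSTID" \
    --dhchap-secret 'DHHC-1:00:<base64 host key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64 controller key>:'
# Tear down; the log expects "disconnected 1 controller(s)" here.
nvme disconnect -n nqn.2024-03.io.spdk:cnode0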
00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.094 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.354 19:46:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.733 19:46:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.301 00:17:08.301 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.301 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.301 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.559 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.559 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.559 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.560 { 00:17:08.560 "cntlid": 131, 00:17:08.560 "qid": 0, 00:17:08.560 "state": "enabled", 00:17:08.560 "thread": "nvmf_tgt_poll_group_000", 00:17:08.560 "listen_address": { 00:17:08.560 "trtype": "TCP", 00:17:08.560 "adrfam": "IPv4", 00:17:08.560 "traddr": "10.0.0.2", 00:17:08.560 "trsvcid": "4420" 00:17:08.560 }, 00:17:08.560 "peer_address": { 00:17:08.560 "trtype": "TCP", 00:17:08.560 "adrfam": "IPv4", 00:17:08.560 "traddr": "10.0.0.1", 00:17:08.560 "trsvcid": "45058" 00:17:08.560 }, 00:17:08.560 "auth": { 00:17:08.560 "state": "completed", 00:17:08.560 "digest": "sha512", 00:17:08.560 "dhgroup": "ffdhe6144" 00:17:08.560 } 00:17:08.560 } 00:17:08.560 ]' 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:08.560 19:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.560 19:46:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.819 19:46:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:09.754 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.012 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.581 00:17:10.581 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.581 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.581 19:46:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.840 { 00:17:10.840 "cntlid": 133, 00:17:10.840 "qid": 0, 00:17:10.840 "state": "enabled", 00:17:10.840 "thread": "nvmf_tgt_poll_group_000", 00:17:10.840 "listen_address": { 00:17:10.840 "trtype": "TCP", 00:17:10.840 "adrfam": "IPv4", 00:17:10.840 "traddr": "10.0.0.2", 00:17:10.840 "trsvcid": "4420" 00:17:10.840 }, 00:17:10.840 "peer_address": { 00:17:10.840 "trtype": "TCP", 00:17:10.840 "adrfam": "IPv4", 00:17:10.840 "traddr": "10.0.0.1", 00:17:10.840 "trsvcid": "45084" 00:17:10.840 }, 00:17:10.840 "auth": { 00:17:10.840 "state": "completed", 00:17:10.840 "digest": "sha512", 00:17:10.840 "dhgroup": "ffdhe6144" 00:17:10.840 } 00:17:10.840 } 00:17:10.840 ]' 00:17:10.840 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
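[Note] The recurring auth.sh@92 "for dhgroup" and @93 "for keyid" trace lines show the suite sweeping every DH group against every key index under sha512. A hedged reconstruction of that skeleton (array contents inferred from what this excerpt exercises, not read from auth.sh; connect_authenticate is the script's own helper, whose body is what the notes above sketch):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # groups seen in this excerpt
keys=(key0 key1 key2 key3)                          # key indices seen in this run

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
        # connect_authenticate sha512 "$dhgroup" "$keyid"  # auth.sh@34-49: add_host,
        # attach, qpair assertions, nvme connect/disconnect, remove_host, detach
    done
done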
00:17:11.099 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.357 19:46:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.318 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:12.577 19:46:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:13.142 00:17:13.142 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.142 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.142 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.400 { 00:17:13.400 "cntlid": 135, 00:17:13.400 "qid": 0, 00:17:13.400 "state": "enabled", 00:17:13.400 "thread": "nvmf_tgt_poll_group_000", 00:17:13.400 "listen_address": { 00:17:13.400 "trtype": "TCP", 00:17:13.400 "adrfam": "IPv4", 00:17:13.400 "traddr": "10.0.0.2", 00:17:13.400 "trsvcid": "4420" 00:17:13.400 }, 00:17:13.400 "peer_address": { 00:17:13.400 "trtype": "TCP", 00:17:13.400 "adrfam": "IPv4", 00:17:13.400 "traddr": "10.0.0.1", 00:17:13.400 "trsvcid": "55076" 00:17:13.400 }, 00:17:13.400 "auth": { 00:17:13.400 "state": "completed", 00:17:13.400 "digest": "sha512", 00:17:13.400 "dhgroup": "ffdhe6144" 00:17:13.400 } 00:17:13.400 } 00:17:13.400 ]' 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.400 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.658 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:13.658 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.658 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.658 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.658 19:46:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.916 19:46:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:14.856 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.114 19:46:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.050 00:17:16.050 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.050 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.050 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.306 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.306 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.306 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:16.306 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.306 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:16.306 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.306 { 00:17:16.306 "cntlid": 137, 00:17:16.306 "qid": 0, 00:17:16.306 "state": "enabled", 00:17:16.306 "thread": "nvmf_tgt_poll_group_000", 00:17:16.306 "listen_address": { 00:17:16.306 "trtype": "TCP", 00:17:16.306 "adrfam": "IPv4", 00:17:16.306 "traddr": "10.0.0.2", 00:17:16.306 "trsvcid": "4420" 00:17:16.306 }, 00:17:16.306 "peer_address": { 00:17:16.306 "trtype": "TCP", 00:17:16.306 "adrfam": "IPv4", 00:17:16.306 "traddr": "10.0.0.1", 00:17:16.306 "trsvcid": "55106" 00:17:16.306 }, 00:17:16.306 "auth": { 00:17:16.306 "state": "completed", 00:17:16.306 "digest": "sha512", 00:17:16.306 "dhgroup": "ffdhe8192" 00:17:16.306 } 00:17:16.306 } 00:17:16.306 ]' 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.307 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.564 19:46:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret 
DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.500 19:46:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:17.757 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.758 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:18.695 00:17:18.695 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.695 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.695 19:46:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.953 { 00:17:18.953 "cntlid": 139, 00:17:18.953 "qid": 0, 00:17:18.953 "state": "enabled", 00:17:18.953 "thread": "nvmf_tgt_poll_group_000", 00:17:18.953 "listen_address": { 00:17:18.953 "trtype": "TCP", 00:17:18.953 "adrfam": "IPv4", 00:17:18.953 "traddr": "10.0.0.2", 00:17:18.953 "trsvcid": "4420" 00:17:18.953 }, 00:17:18.953 "peer_address": { 00:17:18.953 "trtype": "TCP", 00:17:18.953 "adrfam": "IPv4", 00:17:18.953 "traddr": "10.0.0.1", 00:17:18.953 "trsvcid": "55134" 00:17:18.953 }, 00:17:18.953 "auth": { 00:17:18.953 "state": "completed", 00:17:18.953 "digest": "sha512", 00:17:18.953 "dhgroup": "ffdhe8192" 00:17:18.953 } 00:17:18.953 } 00:17:18.953 ]' 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.953 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.211 19:46:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:ZmEzODg5NjExMGJjOGNiNjcyNjg1ZGVjNWM5NDAxMTc3FiBc: --dhchap-ctrl-secret DHHC-1:02:MWIyZTA1MzBjYTAxOThlZDkzNTAwYjc5YTk5YmE4OWZkNzQyZTY5M2RhODc4ZjJjWOJbWw==: 00:17:20.147 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.405 19:46:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.405 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:20.405 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.405 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:20.406 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:20.406 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.406 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.663 19:46:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.600 00:17:21.600 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.600 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.600 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:21.858 19:46:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.858 { 00:17:21.858 "cntlid": 141, 00:17:21.858 "qid": 0, 00:17:21.858 "state": "enabled", 00:17:21.858 "thread": "nvmf_tgt_poll_group_000", 00:17:21.858 "listen_address": { 00:17:21.858 "trtype": "TCP", 00:17:21.858 "adrfam": "IPv4", 00:17:21.858 "traddr": "10.0.0.2", 00:17:21.858 "trsvcid": "4420" 00:17:21.858 }, 00:17:21.858 "peer_address": { 00:17:21.858 "trtype": "TCP", 00:17:21.858 "adrfam": "IPv4", 00:17:21.858 "traddr": "10.0.0.1", 00:17:21.858 "trsvcid": "55166" 00:17:21.858 }, 00:17:21.858 "auth": { 00:17:21.858 "state": "completed", 00:17:21.858 "digest": "sha512", 00:17:21.858 "dhgroup": "ffdhe8192" 00:17:21.858 } 00:17:21.858 } 00:17:21.858 ]' 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.858 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.117 19:46:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YmE1Nzk1OWJiOTQ2YjUyODI4MDlkZTAzOWY3ODg5MmJjMDlmNzM4Y2RkY2U5MTE1sR++lg==: --dhchap-ctrl-secret DHHC-1:01:YWQ4ODhiNDQzMTY3M2Y2N2U5ZGQyNTVjNjkxZjUzNWQTVska: 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:23.051 19:46:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.051 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.309 19:46:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:24.243 00:17:24.243 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.243 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.243 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # 
xtrace_disable 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:24.501 { 00:17:24.501 "cntlid": 143, 00:17:24.501 "qid": 0, 00:17:24.501 "state": "enabled", 00:17:24.501 "thread": "nvmf_tgt_poll_group_000", 00:17:24.501 "listen_address": { 00:17:24.501 "trtype": "TCP", 00:17:24.501 "adrfam": "IPv4", 00:17:24.501 "traddr": "10.0.0.2", 00:17:24.501 "trsvcid": "4420" 00:17:24.501 }, 00:17:24.501 "peer_address": { 00:17:24.501 "trtype": "TCP", 00:17:24.501 "adrfam": "IPv4", 00:17:24.501 "traddr": "10.0.0.1", 00:17:24.501 "trsvcid": "41442" 00:17:24.501 }, 00:17:24.501 "auth": { 00:17:24.501 "state": "completed", 00:17:24.501 "digest": "sha512", 00:17:24.501 "dhgroup": "ffdhe8192" 00:17:24.501 } 00:17:24.501 } 00:17:24.501 ]' 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.501 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:24.759 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:24.759 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:24.759 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.759 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.759 19:46:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.017 19:46:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:25.951 19:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:25.951 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.207 19:46:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.143 00:17:27.143 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.143 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.143 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.143 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.143 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
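Before this final positive pass, the host is re-armed with every supported digest and DH group at once; the IFS=, and printf lines above only assemble the comma-separated option values. Condensed, a sketch with the flags exactly as expanded in this trace:

  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  # key0/ckey0 is then re-verified over sha512 + ffdhe8192, per the qpair dump below.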
00:17:27.144 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:27.144 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.401 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:27.401 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:27.401 { 00:17:27.401 "cntlid": 145, 00:17:27.401 "qid": 0, 00:17:27.401 "state": "enabled", 00:17:27.401 "thread": "nvmf_tgt_poll_group_000", 00:17:27.401 "listen_address": { 00:17:27.401 "trtype": "TCP", 00:17:27.401 "adrfam": "IPv4", 00:17:27.401 "traddr": "10.0.0.2", 00:17:27.401 "trsvcid": "4420" 00:17:27.401 }, 00:17:27.401 "peer_address": { 00:17:27.401 "trtype": "TCP", 00:17:27.401 "adrfam": "IPv4", 00:17:27.401 "traddr": "10.0.0.1", 00:17:27.401 "trsvcid": "41480" 00:17:27.401 }, 00:17:27.401 "auth": { 00:17:27.401 "state": "completed", 00:17:27.401 "digest": "sha512", 00:17:27.401 "dhgroup": "ffdhe8192" 00:17:27.401 } 00:17:27.401 } 00:17:27.401 ]' 00:17:27.401 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:27.401 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:27.402 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:27.402 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:27.402 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:27.402 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.402 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.402 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.659 19:46:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:ZjE3NWFkMmM5ZGY4NmYxN2ExNmZlNjQ0ZjgyMjQ4MmNjODI2NDYzNjg1NTViOWVl4M2Sfw==: --dhchap-ctrl-secret DHHC-1:03:N2Y0NDIyZTk2OGE3NGIxNTIwN2ZjN2U0NDQyNDYyMWZkMDY1MzU3M2Y3ZDMxMGVhNGZhMGZkYjQ3YTE1NzhlZgMPG7k=: 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # local es=0 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@639 -- # local arg=hostrpc 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # type -t hostrpc 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:28.626 19:46:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:29.563 request: 00:17:29.563 { 00:17:29.563 "name": "nvme0", 00:17:29.563 "trtype": "tcp", 00:17:29.563 "traddr": "10.0.0.2", 00:17:29.563 "adrfam": "ipv4", 00:17:29.563 "trsvcid": "4420", 00:17:29.563 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:29.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:29.563 "prchk_reftag": false, 00:17:29.563 "prchk_guard": false, 00:17:29.563 "hdgst": false, 00:17:29.563 "ddgst": false, 00:17:29.563 "dhchap_key": "key2", 00:17:29.563 "method": "bdev_nvme_attach_controller", 00:17:29.563 "req_id": 1 00:17:29.563 } 00:17:29.563 Got JSON-RPC error response 00:17:29.563 response: 00:17:29.563 { 00:17:29.563 "code": -5, 00:17:29.563 "message": "Input/output error" 00:17:29.563 } 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # es=1 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:17:29.563 
19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # local es=0 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@639 -- # local arg=hostrpc 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # type -t hostrpc 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:29.563 19:46:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:30.499 request: 00:17:30.499 { 00:17:30.499 "name": "nvme0", 00:17:30.499 "trtype": "tcp", 00:17:30.499 "traddr": "10.0.0.2", 00:17:30.499 "adrfam": "ipv4", 00:17:30.499 "trsvcid": "4420", 00:17:30.499 "subnqn": "nqn.2024-03.io.spdk:cnode0", 
00:17:30.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:30.499 "prchk_reftag": false, 00:17:30.499 "prchk_guard": false, 00:17:30.499 "hdgst": false, 00:17:30.499 "ddgst": false, 00:17:30.499 "dhchap_key": "key1", 00:17:30.499 "dhchap_ctrlr_key": "ckey2", 00:17:30.499 "method": "bdev_nvme_attach_controller", 00:17:30.499 "req_id": 1 00:17:30.499 } 00:17:30.499 Got JSON-RPC error response 00:17:30.499 response: 00:17:30.499 { 00:17:30.499 "code": -5, 00:17:30.499 "message": "Input/output error" 00:17:30.499 } 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # es=1 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # local es=0 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@639 -- # local arg=hostrpc 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # type -t hostrpc 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.499 19:46:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.494 request: 00:17:31.494 { 00:17:31.494 "name": "nvme0", 00:17:31.494 "trtype": "tcp", 00:17:31.494 "traddr": "10.0.0.2", 00:17:31.494 "adrfam": "ipv4", 00:17:31.494 "trsvcid": "4420", 00:17:31.494 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:31.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:31.494 "prchk_reftag": false, 00:17:31.494 "prchk_guard": false, 00:17:31.494 "hdgst": false, 00:17:31.494 "ddgst": false, 00:17:31.494 "dhchap_key": "key1", 00:17:31.494 "dhchap_ctrlr_key": "ckey1", 00:17:31.494 "method": "bdev_nvme_attach_controller", 00:17:31.494 "req_id": 1 00:17:31.494 } 00:17:31.494 Got JSON-RPC error response 00:17:31.494 response: 00:17:31.494 { 00:17:31.494 "code": -5, 00:17:31.494 "message": "Input/output error" 00:17:31.494 } 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # es=1 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1164963 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' -z 1164963 ']' 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # kill -0 1164963 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # uname 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1164963 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@969 -- # echo 'killing process with pid 1164963' 00:17:31.494 killing process with pid 1164963 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # kill 1164963 00:17:31.494 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@975 -- # wait 1164963 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@725 -- # xtrace_disable 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@485 -- # nvmfpid=1187680 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@486 -- # waitforlisten 1187680 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # '[' -z 1187680 ']' 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:31.753 19:46:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@865 -- # return 0 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@731 -- # xtrace_disable 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1187680 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@832 -- # '[' -z 1187680 ']' 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
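After killing the first target process (pid 1164963), the target is relaunched inside the test netns with the nvmf_auth trace component enabled so the remaining negotiations are logged. Roughly, as a sketch with the command line as expanded above (waitforlisten is the test framework's helper that polls the RPC socket, with max_retries=100 as traced):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  waitforlisten "$nvmfpid"        # blocks until /var/tmp/spdk.sock accepts connections
  trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT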
00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:32.011 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@865 -- # return 0 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.269 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:32.270 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:32.270 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.270 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:32.270 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.270 19:46:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:33.203 00:17:33.203 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.203 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.203 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.461 { 00:17:33.461 "cntlid": 1, 00:17:33.461 "qid": 0, 00:17:33.461 "state": "enabled", 00:17:33.461 "thread": "nvmf_tgt_poll_group_000", 00:17:33.461 "listen_address": { 00:17:33.461 "trtype": "TCP", 00:17:33.461 "adrfam": "IPv4", 00:17:33.461 "traddr": "10.0.0.2", 00:17:33.461 "trsvcid": "4420" 00:17:33.461 }, 00:17:33.461 "peer_address": { 00:17:33.461 "trtype": "TCP", 00:17:33.461 "adrfam": "IPv4", 00:17:33.461 "traddr": "10.0.0.1", 00:17:33.461 "trsvcid": "39604" 00:17:33.461 }, 00:17:33.461 "auth": { 00:17:33.461 "state": "completed", 00:17:33.461 "digest": "sha512", 00:17:33.461 "dhgroup": "ffdhe8192" 00:17:33.461 } 00:17:33.461 } 00:17:33.461 ]' 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.461 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.719 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.719 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.719 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.719 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.719 19:46:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.977 19:46:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:NzQzYTFjNWZmMjM5MmRlZGY4MzkwMDRiY2U4Mzg2ZTJmODYzMGJjODNjMDhjZjU1ZGE4ZjFhMTJjYmQ2YzQ2OYqsFrs=: 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:34.915 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # local es=0 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@639 -- # local arg=hostrpc 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # type -t hostrpc 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.174 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.432 request: 00:17:35.432 { 00:17:35.432 "name": "nvme0", 00:17:35.432 "trtype": "tcp", 00:17:35.432 "traddr": "10.0.0.2", 00:17:35.432 "adrfam": "ipv4", 00:17:35.432 "trsvcid": "4420", 00:17:35.432 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:35.432 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:35.432 "prchk_reftag": false, 00:17:35.432 "prchk_guard": false, 00:17:35.432 "hdgst": false, 00:17:35.432 "ddgst": false, 00:17:35.432 "dhchap_key": "key3", 00:17:35.432 "method": "bdev_nvme_attach_controller", 00:17:35.432 "req_id": 1 00:17:35.432 } 00:17:35.432 Got JSON-RPC error response 00:17:35.432 response: 00:17:35.432 { 00:17:35.432 "code": -5, 00:17:35.432 "message": "Input/output error" 00:17:35.432 } 00:17:35.432 19:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # es=1 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:35.432 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # local es=0 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@639 -- # local arg=hostrpc 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # type -t hostrpc 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.690 19:46:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.948 request: 00:17:35.948 { 00:17:35.948 "name": "nvme0", 00:17:35.948 "trtype": "tcp", 00:17:35.948 "traddr": "10.0.0.2", 00:17:35.948 "adrfam": "ipv4", 00:17:35.948 "trsvcid": "4420", 00:17:35.948 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:35.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:35.948 "prchk_reftag": false, 00:17:35.948 "prchk_guard": false, 00:17:35.948 "hdgst": false, 00:17:35.948 "ddgst": false, 00:17:35.948 "dhchap_key": "key3", 00:17:35.948 
"method": "bdev_nvme_attach_controller", 00:17:35.948 "req_id": 1 00:17:35.948 } 00:17:35.948 Got JSON-RPC error response 00:17:35.948 response: 00:17:35.948 { 00:17:35.948 "code": -5, 00:17:35.948 "message": "Input/output error" 00:17:35.948 } 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # es=1 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:35.948 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # local es=0 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@639 -- # local arg=hostrpc 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # type -t hostrpc 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:36.206 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:36.464 request: 00:17:36.464 { 00:17:36.464 "name": "nvme0", 00:17:36.464 "trtype": "tcp", 00:17:36.464 "traddr": "10.0.0.2", 00:17:36.464 "adrfam": "ipv4", 00:17:36.464 "trsvcid": "4420", 00:17:36.464 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:36.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:36.464 "prchk_reftag": false, 00:17:36.464 "prchk_guard": false, 00:17:36.464 "hdgst": false, 00:17:36.464 "ddgst": false, 00:17:36.464 "dhchap_key": "key0", 00:17:36.464 "dhchap_ctrlr_key": "key1", 00:17:36.464 "method": "bdev_nvme_attach_controller", 00:17:36.464 "req_id": 1 00:17:36.464 } 00:17:36.464 Got JSON-RPC error response 00:17:36.464 response: 00:17:36.464 { 00:17:36.464 "code": -5, 00:17:36.464 "message": "Input/output error" 00:17:36.464 } 00:17:36.464 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # es=1 00:17:36.464 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:17:36.464 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:17:36.464 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:17:36.464 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:36.464 19:46:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:36.722 00:17:36.722 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:36.722 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.722 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:36.980 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.980 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.980 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1165113 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' -z 1165113 ']' 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # kill -0 1165113 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # uname 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1165113 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1165113' 00:17:37.239 killing process with pid 1165113 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # kill 1165113 00:17:37.239 19:46:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@975 -- # wait 1165113 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:37.808 rmmod nvme_tcp 00:17:37.808 rmmod nvme_fabrics 00:17:37.808 rmmod nvme_keyring 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@493 -- # '[' -n 1187680 ']' 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # killprocess 1187680 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@951 -- # '[' -z 1187680 ']' 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # kill -0 1187680 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # uname 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1187680 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1187680' 00:17:37.808 killing process with pid 1187680 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # kill 1187680 00:17:37.808 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@975 -- # wait 1187680 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.067 19:46:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.RVJ /tmp/spdk.key-sha256.E1v /tmp/spdk.key-sha384.gem /tmp/spdk.key-sha512.ZHZ /tmp/spdk.key-sha512.SrT /tmp/spdk.key-sha384.JnG /tmp/spdk.key-sha256.Q6F '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:40.607 00:17:40.607 real 3m9.962s 00:17:40.607 user 7m21.617s 00:17:40.607 sys 0m25.037s 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # xtrace_disable 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.607 ************************************ 00:17:40.607 END TEST nvmf_auth_target 00:17:40.607 ************************************ 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:40.607 19:46:57 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:40.607 ************************************ 00:17:40.607 START TEST nvmf_bdevio_no_huge 00:17:40.607 ************************************ 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:40.607 * Looking for test storage... 00:17:40.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.607 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:40.608 19:46:57 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:40.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # remove_spdk_ns 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # xtrace_disable 00:17:40.608 19:46:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # pci_devs=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -a pci_devs 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # pci_net_devs=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # pci_drivers=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -A pci_drivers 00:17:42.518 
19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # net_devs=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # local -ga net_devs 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # e810=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # local -ga e810 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # x722=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # local -ga x722 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # mlx=() 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # local -ga mlx 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:17:42.518 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:42.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:17:42.519 19:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:42.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # [[ up == up ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:42.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # [[ up == up ]] 00:17:42.519 
19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:42.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # is_hw=yes 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:17:42.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:17:42.519 00:17:42.519 --- 10.0.0.2 ping statistics --- 00:17:42.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.519 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:42.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:17:42.519 00:17:42.519 --- 10.0.0.1 ping statistics --- 00:17:42.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.519 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # return 0 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@725 -- # xtrace_disable 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@485 -- # nvmfpid=1190434 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@486 -- # waitforlisten 1190434 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # '[' -z 1190434 ']' 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:42.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:42.519 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.519 [2024-07-24 19:46:59.667068] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:17:42.519 [2024-07-24 19:46:59.667156] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:42.519 [2024-07-24 19:46:59.740469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:42.519 [2024-07-24 19:46:59.848883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:42.519 [2024-07-24 19:46:59.848941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:42.519 [2024-07-24 19:46:59.848954] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:42.519 [2024-07-24 19:46:59.848966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:42.520 [2024-07-24 19:46:59.848975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:42.520 [2024-07-24 19:46:59.849065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:42.520 [2024-07-24 19:46:59.849130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:42.520 [2024-07-24 19:46:59.849193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:42.520 [2024-07-24 19:46:59.849196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:42.778 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:42.778 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@865 -- # return 0 00:17:42.778 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@731 -- # xtrace_disable 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.779 [2024-07-24 19:46:59.978516] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:42.779 
19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.779 Malloc0 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:42.779 19:46:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:42.779 [2024-07-24 19:47:00.016822] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@536 -- # config=() 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@536 -- # local subsystem config 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:17:42.779 { 00:17:42.779 "params": { 00:17:42.779 "name": "Nvme$subsystem", 00:17:42.779 "trtype": "$TEST_TRANSPORT", 00:17:42.779 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:42.779 "adrfam": "ipv4", 00:17:42.779 "trsvcid": "$NVMF_PORT", 00:17:42.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:42.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:42.779 "hdgst": ${hdgst:-false}, 00:17:42.779 "ddgst": ${ddgst:-false} 00:17:42.779 }, 00:17:42.779 "method": "bdev_nvme_attach_controller" 00:17:42.779 } 00:17:42.779 EOF 00:17:42.779 )") 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # cat 00:17:42.779 
19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # jq . 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@561 -- # IFS=, 00:17:42.779 19:47:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:17:42.779 "params": { 00:17:42.779 "name": "Nvme1", 00:17:42.779 "trtype": "tcp", 00:17:42.779 "traddr": "10.0.0.2", 00:17:42.779 "adrfam": "ipv4", 00:17:42.779 "trsvcid": "4420", 00:17:42.779 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.779 "hdgst": false, 00:17:42.779 "ddgst": false 00:17:42.779 }, 00:17:42.779 "method": "bdev_nvme_attach_controller" 00:17:42.779 }' 00:17:42.779 [2024-07-24 19:47:00.064836] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:17:42.779 [2024-07-24 19:47:00.064915] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1190466 ] 00:17:42.779 [2024-07-24 19:47:00.126975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:43.038 [2024-07-24 19:47:00.241902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.038 [2024-07-24 19:47:00.241953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.038 [2024-07-24 19:47:00.241956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.297 I/O targets: 00:17:43.297 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:43.297 00:17:43.297 00:17:43.297 CUnit - A unit testing framework for C - Version 2.1-3 00:17:43.297 http://cunit.sourceforge.net/ 00:17:43.297 00:17:43.297 00:17:43.297 Suite: bdevio tests on: Nvme1n1 00:17:43.297 Test: blockdev write read block ...passed 00:17:43.297 Test: blockdev write zeroes read block ...passed 00:17:43.297 Test: blockdev write zeroes read no split ...passed 00:17:43.297 Test: blockdev write zeroes read split ...passed 00:17:43.298 Test: blockdev write zeroes read split partial ...passed 00:17:43.298 Test: blockdev reset ...[2024-07-24 19:47:00.608649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:43.298 [2024-07-24 19:47:00.608769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x94dfb0 (9): Bad file descriptor 00:17:43.298 [2024-07-24 19:47:00.660964] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:43.298 passed 00:17:43.557 Test: blockdev write read 8 blocks ...passed 00:17:43.557 Test: blockdev write read size > 128k ...passed 00:17:43.557 Test: blockdev write read invalid size ...passed 00:17:43.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:43.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:43.557 Test: blockdev write read max offset ...passed 00:17:43.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:43.557 Test: blockdev writev readv 8 blocks ...passed 00:17:43.557 Test: blockdev writev readv 30 x 1block ...passed 00:17:43.557 Test: blockdev writev readv block ...passed 00:17:43.557 Test: blockdev writev readv size > 128k ...passed 00:17:43.557 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:43.557 Test: blockdev comparev and writev ...[2024-07-24 19:47:00.913770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.913806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.913831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.913848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.914207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.914231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.914263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.914286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.914618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.914641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.914664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.914680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.915024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.915048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:43.557 [2024-07-24 19:47:00.915070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:43.557 [2024-07-24 19:47:00.915087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:43.816 passed 00:17:43.816 Test: blockdev nvme passthru rw ...passed 00:17:43.816 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:47:00.997532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.816 [2024-07-24 19:47:00.997560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:43.816 [2024-07-24 19:47:00.997717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.816 [2024-07-24 19:47:00.997740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:43.816 [2024-07-24 19:47:00.997893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.816 [2024-07-24 19:47:00.997917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:43.816 [2024-07-24 19:47:00.998067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:43.816 [2024-07-24 19:47:00.998090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:43.816 passed 00:17:43.816 Test: blockdev nvme admin passthru ...passed 00:17:43.816 Test: blockdev copy ...passed 00:17:43.816 00:17:43.816 Run Summary: Type Total Ran Passed Failed Inactive 00:17:43.816 suites 1 1 n/a 0 0 00:17:43.816 tests 23 23 23 0 0 00:17:43.816 asserts 152 152 152 0 n/a 00:17:43.816 00:17:43.816 Elapsed time = 1.206 seconds 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@562 -- # xtrace_disable 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # nvmfcleanup 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:44.074 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:44.074 rmmod nvme_tcp 00:17:44.074 rmmod nvme_fabrics 00:17:44.074 rmmod nvme_keyring 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # '[' -n 1190434 ']' 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # killprocess 1190434 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' -z 1190434 ']' 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # kill -0 1190434 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # uname 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1190434 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # process_name=reactor_3 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@961 -- # '[' reactor_3 = sudo ']' 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1190434' 00:17:44.357 killing process with pid 1190434 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # kill 1190434 00:17:44.357 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@975 -- # wait 1190434 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@282 -- # remove_spdk_ns 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.619 19:47:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:17:47.156 00:17:47.156 real 0m6.443s 00:17:47.156 user 0m10.760s 00:17:47.156 sys 0m2.415s 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # xtrace_disable 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 ************************************ 00:17:47.156 END TEST nvmf_bdevio_no_huge 00:17:47.156 ************************************ 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:17:47.156 19:47:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:47.156 ************************************ 00:17:47.156 START TEST nvmf_tls 00:17:47.156 ************************************ 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:47.156 * Looking for test storage... 00:17:47.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.156 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:47.157 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : 
integer expression expected 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@452 -- # prepare_net_devs 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # local -g is_hw=no 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # remove_spdk_ns 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # xtrace_disable 00:17:47.157 19:47:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.066 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:49.066 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # pci_devs=() 00:17:49.066 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -a pci_devs 00:17:49.066 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # pci_net_devs=() 00:17:49.066 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:17:49.066 19:47:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # pci_drivers=() 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -A pci_drivers 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # net_devs=() 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # local -ga net_devs 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # e810=() 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # local -ga e810 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # x722=() 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # local -ga x722 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # mlx=() 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # local -ga mlx 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
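
The gather_supported_nvmf_pci_devs trace running through here buckets the host's NICs by PCI vendor:device ID before picking the test interfaces: 0x1592/0x159b land in e810, 0x37d2 in x722, and the Mellanox ConnectX/BlueField IDs in mlx. A rough standalone equivalent; the lspci lookup is an assumption standing in for common.sh's pre-built pci_bus_cache map:

#!/usr/bin/env bash
# Rough sketch of the NIC bucketing traced here; the real helper reads a
# cached PCI map and enumerates specific Mellanox IDs rather than globbing.
intel=8086 mellanox=15b3
e810=() x722=() mlx=()
while read -r addr vendor device; do
    case "$vendor:$device" in
        "$intel:1592" | "$intel:159b") e810+=("$addr") ;; # Intel E810 family
        "$intel:37d2") x722+=("$addr") ;;                 # Intel X722
        "$mellanox:"*) mlx+=("$addr") ;;                  # Mellanox ConnectX
    esac
done < <(lspci -Dnmm -d '::0200' | awk '{gsub(/"/, ""); print $1, $3, $4}')
printf 'Found %d e810 device(s): %s\n' "${#e810[@]}" "${e810[*]}"
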
00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:17:49.066 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.067 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.067 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # [[ up == up ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.067 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # [[ up == up ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.067 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # is_hw=yes 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:17:49.067 19:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:17:49.067 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.067 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:17:49.067 00:17:49.067 --- 10.0.0.2 ping statistics --- 00:17:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.067 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.067 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.067 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:17:49.067 00:17:49.067 --- 10.0.0.1 ping statistics --- 00:17:49.067 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.067 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # return 0 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:17:49.067 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1192659 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1192659 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1192659 ']' 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:17:49.068 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.068 [2024-07-24 19:47:06.238830] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:17:49.068 [2024-07-24 19:47:06.238929] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.068 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.068 [2024-07-24 19:47:06.305560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.068 [2024-07-24 19:47:06.415411] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.068 [2024-07-24 19:47:06.415469] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.068 [2024-07-24 19:47:06.415482] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.068 [2024-07-24 19:47:06.415493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.068 [2024-07-24 19:47:06.415502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:49.068 [2024-07-24 19:47:06.415528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:49.326 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:49.584 true 00:17:49.584 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:49.584 19:47:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:49.840 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:49.840 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:49.841 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:50.097 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.097 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:50.355 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:50.355 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:50.355 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:17:50.615 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.615 19:47:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:50.873 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:50.874 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:50.874 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:50.874 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:51.132 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:51.132 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:51.132 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:51.392 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.392 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:51.651 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:51.651 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:51.651 19:47:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:51.909 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:51.909 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # digest=1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # key=ffeeddccbbaa99887766554433221100 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # digest=1 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:17:52.166 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.kr4jfbZS8Z 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.sftTT6srO9 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.kr4jfbZS8Z 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sftTT6srO9 00:17:52.167 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:52.424 19:47:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:52.992 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.kr4jfbZS8Z 00:17:52.992 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.kr4jfbZS8Z 00:17:52.992 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:52.992 [2024-07-24 19:47:10.310628] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.992 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.249 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:53.506 [2024-07-24 19:47:10.832035] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:53.506 [2024-07-24 19:47:10.832294] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:53.506 19:47:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:53.764 malloc0 00:17:53.764 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:54.021 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kr4jfbZS8Z 00:17:54.277 [2024-07-24 19:47:11.565827] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:54.277 19:47:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.kr4jfbZS8Z 00:17:54.277 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.490 Initializing NVMe Controllers 00:18:06.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:06.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:06.490 Initialization complete. Launching workers. 00:18:06.490 ======================================================== 00:18:06.490 Latency(us) 00:18:06.490 Device Information : IOPS MiB/s Average min max 00:18:06.490 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7737.39 30.22 8273.82 1373.85 9630.86 00:18:06.490 ======================================================== 00:18:06.490 Total : 7737.39 30.22 8273.82 1373.85 9630.86 00:18:06.490 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kr4jfbZS8Z 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kr4jfbZS8Z' 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1194436 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1194436 /var/tmp/bdevperf.sock 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1194436 ']' 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:06.490 19:47:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.490 [2024-07-24 19:47:21.757472] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:06.490 [2024-07-24 19:47:21.757566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194436 ] 00:18:06.490 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.490 [2024-07-24 19:47:21.814760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.490 [2024-07-24 19:47:21.923485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.490 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:06.490 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:06.490 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kr4jfbZS8Z 00:18:06.490 [2024-07-24 19:47:22.290337] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.490 [2024-07-24 19:47:22.290473] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.490 TLSTESTn1 00:18:06.490 19:47:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:06.490 Running I/O for 10 seconds... 
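
The data-path check above follows the usual two-process bdevperf pattern: bdevperf starts idle (-z) on a private RPC socket, the TLS-protected subsystem is attached as bdev TLSTEST by presenting the PSK file written earlier (the NVMeTLSkey-1:01:... interchange format generated by format_interchange_psk), and bdevperf.py perform_tests then triggers the verify workload whose results follow. Condensed from the traced commands, with $SPDK_DIR abbreviating the workspace path:

#!/usr/bin/env bash
# Condensed from the trace above; $SPDK_DIR stands in for
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
SOCK=/var/tmp/bdevperf.sock
"$SPDK_DIR"/build/examples/bdevperf -m 0x4 -z -r "$SOCK" \
    -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten "$bdevperf_pid" "$SOCK"    # autotest_common.sh helper

# Attach the target subsystem as bdev "TLSTEST", presenting the PSK.
"$SPDK_DIR"/scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.kr4jfbZS8Z

# -z left bdevperf idle at startup; perform_tests starts the timed run.
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$SOCK" perform_tests
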
00:18:16.499 00:18:16.499 Latency(us) 00:18:16.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.499 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:16.499 Verification LBA range: start 0x0 length 0x2000 00:18:16.499 TLSTESTn1 : 10.02 3433.53 13.41 0.00 0.00 37210.86 8592.50 67186.54 00:18:16.499 =================================================================================================================== 00:18:16.499 Total : 3433.53 13.41 0.00 0.00 37210.86 8592.50 67186.54 00:18:16.499 0 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1194436 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1194436 ']' 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1194436 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1194436 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1194436' 00:18:16.499 killing process with pid 1194436 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1194436 00:18:16.499 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.499 00:18:16.499 Latency(us) 00:18:16.499 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.499 =================================================================================================================== 00:18:16.499 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:16.499 [2024-07-24 19:47:32.586361] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1194436 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sftTT6srO9 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # local es=0 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sftTT6srO9 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@639 -- # local arg=run_bdevperf 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # type -t run_bdevperf 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 
00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sftTT6srO9 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sftTT6srO9' 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1195751 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1195751 /var/tmp/bdevperf.sock 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1195751 ']' 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:16.499 19:47:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.499 [2024-07-24 19:47:32.898399] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:16.499 [2024-07-24 19:47:32.898475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195751 ] 00:18:16.499 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.499 [2024-07-24 19:47:32.954613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.499 [2024-07-24 19:47:33.056495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.499 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:16.499 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:16.499 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sftTT6srO9 00:18:16.499 [2024-07-24 19:47:33.434751] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:16.499 [2024-07-24 19:47:33.434873] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:16.499 [2024-07-24 19:47:33.440253] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:16.499 [2024-07-24 19:47:33.440698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3df90 (107): Transport endpoint is not connected 00:18:16.499 [2024-07-24 19:47:33.441686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3df90 (9): Bad file descriptor 00:18:16.499 [2024-07-24 19:47:33.442684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:16.499 [2024-07-24 19:47:33.442704] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:16.499 [2024-07-24 19:47:33.442735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
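The failing attach above is intentional: tls.sh@146 wraps run_bdevperf in the suite's NOT helper, so the JSON-RPC request/response dump that follows (ending in code -5, "Input/output error") is the asserted outcome rather than a test defect. A minimal, simplified sketch of that helper (a hypothetical reimplementation; the real NOT in autotest_common.sh also verifies via valid_exec_arg that its argument is a runnable function, as the xtrace above shows):

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    (( es != 0 ))    # succeed only when the wrapped command failed
}
# as invoked here: NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sftTT6srO9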
00:18:16.499 request: 00:18:16.499 { 00:18:16.499 "name": "TLSTEST", 00:18:16.499 "trtype": "tcp", 00:18:16.499 "traddr": "10.0.0.2", 00:18:16.499 "adrfam": "ipv4", 00:18:16.499 "trsvcid": "4420", 00:18:16.499 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.499 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.499 "prchk_reftag": false, 00:18:16.499 "prchk_guard": false, 00:18:16.499 "hdgst": false, 00:18:16.499 "ddgst": false, 00:18:16.499 "psk": "/tmp/tmp.sftTT6srO9", 00:18:16.499 "method": "bdev_nvme_attach_controller", 00:18:16.499 "req_id": 1 00:18:16.499 } 00:18:16.499 Got JSON-RPC error response 00:18:16.499 response: 00:18:16.500 { 00:18:16.500 "code": -5, 00:18:16.500 "message": "Input/output error" 00:18:16.500 } 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1195751 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1195751 ']' 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1195751 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1195751 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1195751' 00:18:16.500 killing process with pid 1195751 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1195751 00:18:16.500 Received shutdown signal, test time was about 10.000000 seconds 00:18:16.500 00:18:16.500 Latency(us) 00:18:16.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:16.500 =================================================================================================================== 00:18:16.500 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:16.500 [2024-07-24 19:47:33.496566] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1195751 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # es=1 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kr4jfbZS8Z 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # local es=0 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kr4jfbZS8Z 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@639 -- # local arg=run_bdevperf 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # type -t run_bdevperf 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kr4jfbZS8Z 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kr4jfbZS8Z' 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1195890 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1195890 /var/tmp/bdevperf.sock 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1195890 ']' 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:16.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:16.500 19:47:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.500 [2024-07-24 19:47:33.805811] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:16.500 [2024-07-24 19:47:33.805887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195890 ] 00:18:16.500 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.500 [2024-07-24 19:47:33.865213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.758 [2024-07-24 19:47:33.973567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.758 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:16.758 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:16.758 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.kr4jfbZS8Z 00:18:17.017 [2024-07-24 19:47:34.366934] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.017 [2024-07-24 19:47:34.367051] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:17.017 [2024-07-24 19:47:34.372312] tcp.c: 968:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:17.017 [2024-07-24 19:47:34.372344] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:17.017 [2024-07-24 19:47:34.372383] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:17.017 [2024-07-24 19:47:34.372882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7bf90 (107): Transport endpoint is not connected 00:18:17.017 [2024-07-24 19:47:34.373869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7bf90 (9): Bad file descriptor 00:18:17.017 [2024-07-24 19:47:34.374866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.017 [2024-07-24 19:47:34.374888] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:17.017 [2024-07-24 19:47:34.374920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
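tls.sh@149 varies only the host NQN, and the tcp.c/posix.c errors above show where it breaks: the target resolves the TLS PSK by an identity string combining hostnqn and subnqn, and no key is registered for host2. A rough sketch of that identity exactly as the target printed it (the NVMe0R01 prefix is copied verbatim from the log; its internal version/hash encoding is not derived here):

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$identity"    # the identity the target reported it could not find a PSK for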
00:18:17.017 request: 00:18:17.017 { 00:18:17.017 "name": "TLSTEST", 00:18:17.017 "trtype": "tcp", 00:18:17.017 "traddr": "10.0.0.2", 00:18:17.017 "adrfam": "ipv4", 00:18:17.018 "trsvcid": "4420", 00:18:17.018 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.018 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:17.018 "prchk_reftag": false, 00:18:17.018 "prchk_guard": false, 00:18:17.018 "hdgst": false, 00:18:17.018 "ddgst": false, 00:18:17.018 "psk": "/tmp/tmp.kr4jfbZS8Z", 00:18:17.018 "method": "bdev_nvme_attach_controller", 00:18:17.018 "req_id": 1 00:18:17.018 } 00:18:17.018 Got JSON-RPC error response 00:18:17.018 response: 00:18:17.018 { 00:18:17.018 "code": -5, 00:18:17.018 "message": "Input/output error" 00:18:17.018 } 00:18:17.018 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1195890 00:18:17.018 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1195890 ']' 00:18:17.018 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1195890 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1195890 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1195890' 00:18:17.278 killing process with pid 1195890 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1195890 00:18:17.278 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.278 00:18:17.278 Latency(us) 00:18:17.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.278 =================================================================================================================== 00:18:17.278 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.278 [2024-07-24 19:47:34.428023] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.278 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1195890 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # es=1 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kr4jfbZS8Z 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # local es=0 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kr4jfbZS8Z 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@639 -- # local arg=run_bdevperf 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # type -t run_bdevperf 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kr4jfbZS8Z 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kr4jfbZS8Z' 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1196024 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1196024 /var/tmp/bdevperf.sock 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1196024 ']' 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:17.537 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.537 [2024-07-24 19:47:34.727544] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:17.537 [2024-07-24 19:47:34.727623] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196024 ] 00:18:17.537 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.537 [2024-07-24 19:47:34.784454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.537 [2024-07-24 19:47:34.888906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.796 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:17.796 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:17.796 19:47:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kr4jfbZS8Z 00:18:18.055 [2024-07-24 19:47:35.233324] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.055 [2024-07-24 19:47:35.233428] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:18.055 [2024-07-24 19:47:35.242022] tcp.c: 968:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:18.055 [2024-07-24 19:47:35.242051] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:18.055 [2024-07-24 19:47:35.242104] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:18.055 [2024-07-24 19:47:35.242271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccef90 (107): Transport endpoint is not connected 00:18:18.055 [2024-07-24 19:47:35.243259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ccef90 (9): Bad file descriptor 00:18:18.055 [2024-07-24 19:47:35.244256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:18.055 [2024-07-24 19:47:35.244276] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:18.055 [2024-07-24 19:47:35.244308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
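Taken together, tls.sh@146, @149 and @152 form a small negative matrix around the same attach path; consolidated from the traces (each call is expected to fail, and each produces the same code -5 response):

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sftTT6srO9   # key material the target does not accept
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.kr4jfbZS8Z   # host NQN with no registered PSK
NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.kr4jfbZS8Z   # subsystem NQN with no registered PSK

The request/response dump below is the @152 instance of that failure.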
00:18:18.055 request: 00:18:18.055 { 00:18:18.055 "name": "TLSTEST", 00:18:18.056 "trtype": "tcp", 00:18:18.056 "traddr": "10.0.0.2", 00:18:18.056 "adrfam": "ipv4", 00:18:18.056 "trsvcid": "4420", 00:18:18.056 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:18.056 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.056 "prchk_reftag": false, 00:18:18.056 "prchk_guard": false, 00:18:18.056 "hdgst": false, 00:18:18.056 "ddgst": false, 00:18:18.056 "psk": "/tmp/tmp.kr4jfbZS8Z", 00:18:18.056 "method": "bdev_nvme_attach_controller", 00:18:18.056 "req_id": 1 00:18:18.056 } 00:18:18.056 Got JSON-RPC error response 00:18:18.056 response: 00:18:18.056 { 00:18:18.056 "code": -5, 00:18:18.056 "message": "Input/output error" 00:18:18.056 } 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1196024 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1196024 ']' 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1196024 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1196024 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1196024' 00:18:18.056 killing process with pid 1196024 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1196024 00:18:18.056 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.056 00:18:18.056 Latency(us) 00:18:18.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.056 =================================================================================================================== 00:18:18.056 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.056 [2024-07-24 19:47:35.293516] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:18.056 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1196024 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # es=1 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # local es=0 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@639 -- # local arg=run_bdevperf 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # type -t run_bdevperf 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1196052 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1196052 /var/tmp/bdevperf.sock 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1196052 ']' 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:18.316 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.316 [2024-07-24 19:47:35.605700] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:18.316 [2024-07-24 19:47:35.605792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196052 ] 00:18:18.316 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.316 [2024-07-24 19:47:35.671889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.575 [2024-07-24 19:47:35.787566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.575 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:18.575 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:18.575 19:47:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:18.834 [2024-07-24 19:47:36.144471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:18.834 [2024-07-24 19:47:36.146735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a7770 (9): Bad file descriptor 00:18:18.834 [2024-07-24 19:47:36.147729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:18.834 [2024-07-24 19:47:36.147751] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:18.834 [2024-07-24 19:47:36.147767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
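tls.sh@155 drops the PSK entirely (psk= is empty in the trace above), so bdevperf attempts a plain TCP attach; the target closes the connection before controller init, producing the same errno 107 / bad-file-descriptor trail and the -5 response dumped below. As invoked by the suite:

NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''   # no --psk: plain TCP against the TLS-enabled listener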
00:18:18.834 request: 00:18:18.834 { 00:18:18.834 "name": "TLSTEST", 00:18:18.834 "trtype": "tcp", 00:18:18.834 "traddr": "10.0.0.2", 00:18:18.834 "adrfam": "ipv4", 00:18:18.834 "trsvcid": "4420", 00:18:18.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:18.834 "prchk_reftag": false, 00:18:18.834 "prchk_guard": false, 00:18:18.834 "hdgst": false, 00:18:18.834 "ddgst": false, 00:18:18.834 "method": "bdev_nvme_attach_controller", 00:18:18.834 "req_id": 1 00:18:18.834 } 00:18:18.834 Got JSON-RPC error response 00:18:18.834 response: 00:18:18.834 { 00:18:18.834 "code": -5, 00:18:18.834 "message": "Input/output error" 00:18:18.834 } 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1196052 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1196052 ']' 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1196052 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1196052 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1196052' 00:18:18.834 killing process with pid 1196052 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1196052 00:18:18.834 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.834 00:18:18.834 Latency(us) 00:18:18.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.834 =================================================================================================================== 00:18:18.834 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.834 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1196052 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # es=1 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1192659 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1192659 ']' 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1192659 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:19.092 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1192659 00:18:19.350 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:18:19.350 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:18:19.350 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1192659' 00:18:19.350 killing process with pid 1192659 00:18:19.350 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1192659 00:18:19.350 [2024-07-24 19:47:36.487238] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:19.350 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1192659 00:18:19.608 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:19.608 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:19.608 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@706 -- # local prefix key digest 00:18:19.608 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@708 -- # digest=2 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@709 -- # python - 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.lzsIFDHYc9 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.lzsIFDHYc9 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1196311 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1196311 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1196311 ']' 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.609 19:47:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:19.609 19:47:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.609 [2024-07-24 19:47:36.896752] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:19.609 [2024-07-24 19:47:36.896858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.609 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.609 [2024-07-24 19:47:36.966349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.867 [2024-07-24 19:47:37.081008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.867 [2024-07-24 19:47:37.081071] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.867 [2024-07-24 19:47:37.081095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.867 [2024-07-24 19:47:37.081109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.867 [2024-07-24 19:47:37.081120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
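The long-format key generated above (key_long, written to /tmp/tmp.lzsIFDHYc9 and restricted to mode 0600) follows the NVMe TLS PSK interchange layout: the NVMeTLSkey-1 prefix, a digest field (02 here, from the digest argument 2), and a base64 payload. A minimal sketch of the encoding, mirroring the helper's own python step; the appended CRC-32 and its little-endian byte order are assumptions, not read from the helper:

python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff0011223344556677"   # the configured key string, as bytes
payload = key + struct.pack("<I", zlib.crc32(key))           # append CRC-32 (little-endian assumed)
print("NVMeTLSkey-1:02:" + base64.b64encode(payload).decode() + ":")
EOF
# the trace above records the result as:
# NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: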
00:18:19.867 [2024-07-24 19:47:37.081150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.lzsIFDHYc9 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lzsIFDHYc9 00:18:20.804 19:47:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.804 [2024-07-24 19:47:38.086205] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.804 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.062 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.321 [2024-07-24 19:47:38.611612] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.321 [2024-07-24 19:47:38.611854] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.321 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.579 malloc0 00:18:21.579 19:47:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.838 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:22.096 [2024-07-24 19:47:39.424843] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lzsIFDHYc9 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lzsIFDHYc9' 00:18:22.096 19:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1196610 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1196610 /var/tmp/bdevperf.sock 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1196610 ']' 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:22.096 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.355 [2024-07-24 19:47:39.486686] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:22.355 [2024-07-24 19:47:39.486772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196610 ] 00:18:22.355 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.355 [2024-07-24 19:47:39.546393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.355 [2024-07-24 19:47:39.658896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.614 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:22.614 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:22.614 19:47:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:22.872 [2024-07-24 19:47:40.041274] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:22.872 [2024-07-24 19:47:40.041413] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:22.872 TLSTESTn1 00:18:22.872 19:47:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:22.872 Running I/O for 10 seconds... 
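For reference, the target-side TLS setup performed by setup_nvmf_tgt at tls.sh@165, consolidated from the traces above (rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path; NQNs, address and key path are as in this run):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS on the listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9

With host1 registered against that key, the attach succeeds and the ten-second verify workload whose results follow is driven over the bdevperf RPC socket by bdevperf.py perform_tests.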
00:18:35.092 00:18:35.092 Latency(us) 00:18:35.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.092 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:35.092 Verification LBA range: start 0x0 length 0x2000 00:18:35.092 TLSTESTn1 : 10.02 3458.21 13.51 0.00 0.00 36947.60 6213.78 43690.67 00:18:35.092 =================================================================================================================== 00:18:35.092 Total : 3458.21 13.51 0.00 0.00 36947.60 6213.78 43690.67 00:18:35.092 0 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1196610 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1196610 ']' 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1196610 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1196610 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1196610' 00:18:35.092 killing process with pid 1196610 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1196610 00:18:35.092 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.092 00:18:35.092 Latency(us) 00:18:35.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.092 =================================================================================================================== 00:18:35.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.092 [2024-07-24 19:47:50.340291] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1196610 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.lzsIFDHYc9 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lzsIFDHYc9 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # local es=0 00:18:35.092 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lzsIFDHYc9 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@639 -- # local arg=run_bdevperf 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # type -t run_bdevperf 00:18:35.093 
19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lzsIFDHYc9 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lzsIFDHYc9' 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1197925 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1197925 /var/tmp/bdevperf.sock 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1197925 ']' 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.093 [2024-07-24 19:47:50.656014] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:35.093 [2024-07-24 19:47:50.656106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197925 ] 00:18:35.093 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.093 [2024-07-24 19:47:50.713503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.093 [2024-07-24 19:47:50.820740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:35.093 19:47:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:35.093 [2024-07-24 19:47:51.157721] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.093 [2024-07-24 19:47:51.157802] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:35.093 [2024-07-24 19:47:51.157815] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.lzsIFDHYc9 00:18:35.093 request: 00:18:35.093 { 00:18:35.093 "name": "TLSTEST", 00:18:35.093 "trtype": "tcp", 00:18:35.093 "traddr": "10.0.0.2", 00:18:35.093 "adrfam": "ipv4", 00:18:35.093 "trsvcid": "4420", 00:18:35.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.093 "prchk_reftag": false, 00:18:35.093 "prchk_guard": false, 00:18:35.093 "hdgst": false, 00:18:35.093 "ddgst": false, 00:18:35.093 "psk": "/tmp/tmp.lzsIFDHYc9", 00:18:35.093 "method": "bdev_nvme_attach_controller", 00:18:35.093 "req_id": 1 00:18:35.093 } 00:18:35.093 Got JSON-RPC error response 00:18:35.093 response: 00:18:35.093 { 00:18:35.093 "code": -1, 00:18:35.093 "message": "Operation not permitted" 00:18:35.093 } 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1197925 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1197925 ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1197925 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1197925 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1197925' 00:18:35.093 killing process with pid 1197925 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1197925 00:18:35.093 Received shutdown signal, test time was about 10.000000 seconds 00:18:35.093 
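Unlike the preceding failures, this one never reaches the wire: after tls.sh@170 loosened the key file to mode 0666, bdev_nvme refused to load it ("Incorrect permissions for PSK file") and the RPC returned code -1, "Operation not permitted", instead of the -5 I/O error seen earlier. The precondition is purely the file mode:

chmod 0600 /tmp/tmp.lzsIFDHYc9   # as set at tls.sh@162: accepted by bdev_nvme_attach_controller
chmod 0666 /tmp/tmp.lzsIFDHYc9   # as set at tls.sh@170: rejected before any connection is attempted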
00:18:35.093 Latency(us) 00:18:35.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.093 =================================================================================================================== 00:18:35.093 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1197925 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # es=1 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1196311 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1196311 ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1196311 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1196311 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1196311' 00:18:35.093 killing process with pid 1196311 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1196311 00:18:35.093 [2024-07-24 19:47:51.489905] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1196311 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1198068 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1198068 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1198068 ']' 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.093 19:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:35.093 19:47:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.093 [2024-07-24 19:47:51.831270] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:35.093 [2024-07-24 19:47:51.831380] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.093 EAL: No free 2048 kB hugepages reported on node 1 00:18:35.093 [2024-07-24 19:47:51.899180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.093 [2024-07-24 19:47:52.013777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.093 [2024-07-24 19:47:52.013844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.093 [2024-07-24 19:47:52.013869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.093 [2024-07-24 19:47:52.013882] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.093 [2024-07-24 19:47:52.013894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
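The failed attach above is the point of this negative test: bdev_nvme_load_psk refuses the PSK interchange file because its permissions are too open, and the RPC surfaces that as code -1, "Operation not permitted". A minimal sketch of the passing variant, using the same flags the trace shows and assuming the key has first been restricted to owner-only access (the suite does exactly this with chmod 0600 a few steps further down):

    # Restrict the PSK interchange file before handing it to the RPC;
    # SPDK rejects key files readable by group/other.
    chmod 0600 /tmp/tmp.lzsIFDHYc9
    # Same bdev_nvme_attach_controller invocation as in the trace above.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9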
00:18:35.093 [2024-07-24 19:47:52.013931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.lzsIFDHYc9 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # local es=0 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.lzsIFDHYc9 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@639 -- # local arg=setup_nvmf_tgt 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # type -t setup_nvmf_tgt 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # setup_nvmf_tgt /tmp/tmp.lzsIFDHYc9 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lzsIFDHYc9 00:18:35.706 19:47:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:35.706 [2024-07-24 19:47:53.022798] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.706 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:35.963 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:36.221 [2024-07-24 19:47:53.512096] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:36.221 [2024-07-24 19:47:53.512375] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.221 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:36.479 malloc0 00:18:36.479 19:47:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:36.736 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:36.995 [2024-07-24 19:47:54.252904] tcp.c:3722:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:36.995 [2024-07-24 19:47:54.252949] tcp.c:3808:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:36.995 [2024-07-24 19:47:54.252994] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:36.995 request: 00:18:36.995 { 00:18:36.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.995 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.995 "psk": "/tmp/tmp.lzsIFDHYc9", 00:18:36.995 "method": "nvmf_subsystem_add_host", 00:18:36.996 "req_id": 1 00:18:36.996 } 00:18:36.996 Got JSON-RPC error response 00:18:36.996 response: 00:18:36.996 { 00:18:36.996 "code": -32603, 00:18:36.996 "message": "Internal error" 00:18:36.996 } 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # es=1 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1198068 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1198068 ']' 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1198068 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1198068 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1198068' 00:18:36.996 killing process with pid 1198068 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1198068 00:18:36.996 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1198068 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.lzsIFDHYc9 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1198378 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # 
waitforlisten 1198378 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1198378 ']' 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:37.254 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.538 [2024-07-24 19:47:54.656986] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:37.538 [2024-07-24 19:47:54.657073] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.538 EAL: No free 2048 kB hugepages reported on node 1 00:18:37.538 [2024-07-24 19:47:54.737170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.538 [2024-07-24 19:47:54.857518] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.538 [2024-07-24 19:47:54.857592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.538 [2024-07-24 19:47:54.857607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.538 [2024-07-24 19:47:54.857619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.538 [2024-07-24 19:47:54.857631] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
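The NOT-wrapped pass above (target/tls.sh@177) drove this same helper into the expected nvmf_subsystem_add_host failure; now that the key has been restricted with chmod 0600, the rerun at target/tls.sh@185 below goes through. Reconstructed from the xtrace lines of this run, setup_nvmf_tgt reduces to the following RPC sequence (a sketch: the rpc.py path is shortened, and the key file name is the one generated by this run):

    key=/tmp/tmp.lzsIFDHYc9
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled (logged above as experimental)
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # This step fails with -32603 while the key is world-readable,
    # and succeeds once the file is 0600.
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key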
00:18:37.538 [2024-07-24 19:47:54.857664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:37.803 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:37.803 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:37.803 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:37.803 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:18:37.803 19:47:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.803 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.803 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.lzsIFDHYc9 00:18:37.803 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lzsIFDHYc9 00:18:37.803 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:38.061 [2024-07-24 19:47:55.265426] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.061 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:38.318 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:38.576 [2024-07-24 19:47:55.750770] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.576 [2024-07-24 19:47:55.751041] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.576 19:47:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:38.833 malloc0 00:18:38.833 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:39.090 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:39.348 [2024-07-24 19:47:56.480770] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1198657 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1198657 /var/tmp/bdevperf.sock 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- 
# '[' -z 1198657 ']' 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:39.348 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.348 [2024-07-24 19:47:56.542496] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:39.348 [2024-07-24 19:47:56.542581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198657 ] 00:18:39.348 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.348 [2024-07-24 19:47:56.599093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.348 [2024-07-24 19:47:56.704946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.605 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:39.605 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:39.605 19:47:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:39.863 [2024-07-24 19:47:57.041568] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.863 [2024-07-24 19:47:57.041698] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:39.863 TLSTESTn1 00:18:39.863 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:40.121 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:40.121 "subsystems": [ 00:18:40.121 { 00:18:40.121 "subsystem": "keyring", 00:18:40.121 "config": [] 00:18:40.121 }, 00:18:40.121 { 00:18:40.121 "subsystem": "iobuf", 00:18:40.121 "config": [ 00:18:40.121 { 00:18:40.121 "method": "iobuf_set_options", 00:18:40.121 "params": { 00:18:40.121 "small_pool_count": 8192, 00:18:40.121 "large_pool_count": 1024, 00:18:40.121 "small_bufsize": 8192, 00:18:40.121 "large_bufsize": 135168 00:18:40.121 } 00:18:40.121 } 00:18:40.121 ] 00:18:40.121 }, 00:18:40.121 { 00:18:40.121 "subsystem": "sock", 00:18:40.121 "config": [ 00:18:40.121 { 00:18:40.121 "method": "sock_set_default_impl", 00:18:40.121 "params": { 00:18:40.121 "impl_name": "posix" 00:18:40.121 } 00:18:40.121 }, 00:18:40.121 { 00:18:40.121 "method": "sock_impl_set_options", 00:18:40.121 "params": { 00:18:40.121 "impl_name": "ssl", 00:18:40.121 "recv_buf_size": 4096, 00:18:40.121 "send_buf_size": 4096, 
00:18:40.121 "enable_recv_pipe": true, 00:18:40.121 "enable_quickack": false, 00:18:40.121 "enable_placement_id": 0, 00:18:40.121 "enable_zerocopy_send_server": true, 00:18:40.121 "enable_zerocopy_send_client": false, 00:18:40.121 "zerocopy_threshold": 0, 00:18:40.121 "tls_version": 0, 00:18:40.121 "enable_ktls": false 00:18:40.121 } 00:18:40.121 }, 00:18:40.121 { 00:18:40.121 "method": "sock_impl_set_options", 00:18:40.121 "params": { 00:18:40.121 "impl_name": "posix", 00:18:40.121 "recv_buf_size": 2097152, 00:18:40.121 "send_buf_size": 2097152, 00:18:40.121 "enable_recv_pipe": true, 00:18:40.121 "enable_quickack": false, 00:18:40.121 "enable_placement_id": 0, 00:18:40.121 "enable_zerocopy_send_server": true, 00:18:40.121 "enable_zerocopy_send_client": false, 00:18:40.121 "zerocopy_threshold": 0, 00:18:40.122 "tls_version": 0, 00:18:40.122 "enable_ktls": false 00:18:40.122 } 00:18:40.122 } 00:18:40.122 ] 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "subsystem": "vmd", 00:18:40.122 "config": [] 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "subsystem": "accel", 00:18:40.122 "config": [ 00:18:40.122 { 00:18:40.122 "method": "accel_set_options", 00:18:40.122 "params": { 00:18:40.122 "small_cache_size": 128, 00:18:40.122 "large_cache_size": 16, 00:18:40.122 "task_count": 2048, 00:18:40.122 "sequence_count": 2048, 00:18:40.122 "buf_count": 2048 00:18:40.122 } 00:18:40.122 } 00:18:40.122 ] 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "subsystem": "bdev", 00:18:40.122 "config": [ 00:18:40.122 { 00:18:40.122 "method": "bdev_set_options", 00:18:40.122 "params": { 00:18:40.122 "bdev_io_pool_size": 65535, 00:18:40.122 "bdev_io_cache_size": 256, 00:18:40.122 "bdev_auto_examine": true, 00:18:40.122 "iobuf_small_cache_size": 128, 00:18:40.122 "iobuf_large_cache_size": 16 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "bdev_raid_set_options", 00:18:40.122 "params": { 00:18:40.122 "process_window_size_kb": 1024, 00:18:40.122 "process_max_bandwidth_mb_sec": 0 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "bdev_iscsi_set_options", 00:18:40.122 "params": { 00:18:40.122 "timeout_sec": 30 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "bdev_nvme_set_options", 00:18:40.122 "params": { 00:18:40.122 "action_on_timeout": "none", 00:18:40.122 "timeout_us": 0, 00:18:40.122 "timeout_admin_us": 0, 00:18:40.122 "keep_alive_timeout_ms": 10000, 00:18:40.122 "arbitration_burst": 0, 00:18:40.122 "low_priority_weight": 0, 00:18:40.122 "medium_priority_weight": 0, 00:18:40.122 "high_priority_weight": 0, 00:18:40.122 "nvme_adminq_poll_period_us": 10000, 00:18:40.122 "nvme_ioq_poll_period_us": 0, 00:18:40.122 "io_queue_requests": 0, 00:18:40.122 "delay_cmd_submit": true, 00:18:40.122 "transport_retry_count": 4, 00:18:40.122 "bdev_retry_count": 3, 00:18:40.122 "transport_ack_timeout": 0, 00:18:40.122 "ctrlr_loss_timeout_sec": 0, 00:18:40.122 "reconnect_delay_sec": 0, 00:18:40.122 "fast_io_fail_timeout_sec": 0, 00:18:40.122 "disable_auto_failback": false, 00:18:40.122 "generate_uuids": false, 00:18:40.122 "transport_tos": 0, 00:18:40.122 "nvme_error_stat": false, 00:18:40.122 "rdma_srq_size": 0, 00:18:40.122 "io_path_stat": false, 00:18:40.122 "allow_accel_sequence": false, 00:18:40.122 "rdma_max_cq_size": 0, 00:18:40.122 "rdma_cm_event_timeout_ms": 0, 00:18:40.122 "dhchap_digests": [ 00:18:40.122 "sha256", 00:18:40.122 "sha384", 00:18:40.122 "sha512" 00:18:40.122 ], 00:18:40.122 "dhchap_dhgroups": [ 00:18:40.122 "null", 00:18:40.122 "ffdhe2048", 00:18:40.122 
"ffdhe3072", 00:18:40.122 "ffdhe4096", 00:18:40.122 "ffdhe6144", 00:18:40.122 "ffdhe8192" 00:18:40.122 ] 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "bdev_nvme_set_hotplug", 00:18:40.122 "params": { 00:18:40.122 "period_us": 100000, 00:18:40.122 "enable": false 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "bdev_malloc_create", 00:18:40.122 "params": { 00:18:40.122 "name": "malloc0", 00:18:40.122 "num_blocks": 8192, 00:18:40.122 "block_size": 4096, 00:18:40.122 "physical_block_size": 4096, 00:18:40.122 "uuid": "c695c10e-94b0-4cd4-8be7-42eadb75646d", 00:18:40.122 "optimal_io_boundary": 0, 00:18:40.122 "md_size": 0, 00:18:40.122 "dif_type": 0, 00:18:40.122 "dif_is_head_of_md": false, 00:18:40.122 "dif_pi_format": 0 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "bdev_wait_for_examine" 00:18:40.122 } 00:18:40.122 ] 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "subsystem": "nbd", 00:18:40.122 "config": [] 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "subsystem": "scheduler", 00:18:40.122 "config": [ 00:18:40.122 { 00:18:40.122 "method": "framework_set_scheduler", 00:18:40.122 "params": { 00:18:40.122 "name": "static" 00:18:40.122 } 00:18:40.122 } 00:18:40.122 ] 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "subsystem": "nvmf", 00:18:40.122 "config": [ 00:18:40.122 { 00:18:40.122 "method": "nvmf_set_config", 00:18:40.122 "params": { 00:18:40.122 "discovery_filter": "match_any", 00:18:40.122 "admin_cmd_passthru": { 00:18:40.122 "identify_ctrlr": false 00:18:40.122 } 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_set_max_subsystems", 00:18:40.122 "params": { 00:18:40.122 "max_subsystems": 1024 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_set_crdt", 00:18:40.122 "params": { 00:18:40.122 "crdt1": 0, 00:18:40.122 "crdt2": 0, 00:18:40.122 "crdt3": 0 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_create_transport", 00:18:40.122 "params": { 00:18:40.122 "trtype": "TCP", 00:18:40.122 "max_queue_depth": 128, 00:18:40.122 "max_io_qpairs_per_ctrlr": 127, 00:18:40.122 "in_capsule_data_size": 4096, 00:18:40.122 "max_io_size": 131072, 00:18:40.122 "io_unit_size": 131072, 00:18:40.122 "max_aq_depth": 128, 00:18:40.122 "num_shared_buffers": 511, 00:18:40.122 "buf_cache_size": 4294967295, 00:18:40.122 "dif_insert_or_strip": false, 00:18:40.122 "zcopy": false, 00:18:40.122 "c2h_success": false, 00:18:40.122 "sock_priority": 0, 00:18:40.122 "abort_timeout_sec": 1, 00:18:40.122 "ack_timeout": 0, 00:18:40.122 "data_wr_pool_size": 0 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_create_subsystem", 00:18:40.122 "params": { 00:18:40.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.122 "allow_any_host": false, 00:18:40.122 "serial_number": "SPDK00000000000001", 00:18:40.122 "model_number": "SPDK bdev Controller", 00:18:40.122 "max_namespaces": 10, 00:18:40.122 "min_cntlid": 1, 00:18:40.122 "max_cntlid": 65519, 00:18:40.122 "ana_reporting": false 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_subsystem_add_host", 00:18:40.122 "params": { 00:18:40.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.122 "host": "nqn.2016-06.io.spdk:host1", 00:18:40.122 "psk": "/tmp/tmp.lzsIFDHYc9" 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_subsystem_add_ns", 00:18:40.122 "params": { 00:18:40.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.122 "namespace": { 00:18:40.122 "nsid": 1, 00:18:40.122 
"bdev_name": "malloc0", 00:18:40.122 "nguid": "C695C10E94B04CD48BE742EADB75646D", 00:18:40.122 "uuid": "c695c10e-94b0-4cd4-8be7-42eadb75646d", 00:18:40.122 "no_auto_visible": false 00:18:40.122 } 00:18:40.122 } 00:18:40.122 }, 00:18:40.122 { 00:18:40.122 "method": "nvmf_subsystem_add_listener", 00:18:40.122 "params": { 00:18:40.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.122 "listen_address": { 00:18:40.122 "trtype": "TCP", 00:18:40.122 "adrfam": "IPv4", 00:18:40.122 "traddr": "10.0.0.2", 00:18:40.122 "trsvcid": "4420" 00:18:40.122 }, 00:18:40.122 "secure_channel": true 00:18:40.122 } 00:18:40.122 } 00:18:40.122 ] 00:18:40.122 } 00:18:40.122 ] 00:18:40.122 }' 00:18:40.122 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:40.691 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:40.691 "subsystems": [ 00:18:40.691 { 00:18:40.691 "subsystem": "keyring", 00:18:40.691 "config": [] 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "subsystem": "iobuf", 00:18:40.691 "config": [ 00:18:40.691 { 00:18:40.691 "method": "iobuf_set_options", 00:18:40.691 "params": { 00:18:40.691 "small_pool_count": 8192, 00:18:40.691 "large_pool_count": 1024, 00:18:40.691 "small_bufsize": 8192, 00:18:40.691 "large_bufsize": 135168 00:18:40.691 } 00:18:40.691 } 00:18:40.691 ] 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "subsystem": "sock", 00:18:40.691 "config": [ 00:18:40.691 { 00:18:40.691 "method": "sock_set_default_impl", 00:18:40.691 "params": { 00:18:40.691 "impl_name": "posix" 00:18:40.691 } 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "method": "sock_impl_set_options", 00:18:40.691 "params": { 00:18:40.691 "impl_name": "ssl", 00:18:40.691 "recv_buf_size": 4096, 00:18:40.691 "send_buf_size": 4096, 00:18:40.691 "enable_recv_pipe": true, 00:18:40.691 "enable_quickack": false, 00:18:40.691 "enable_placement_id": 0, 00:18:40.691 "enable_zerocopy_send_server": true, 00:18:40.691 "enable_zerocopy_send_client": false, 00:18:40.691 "zerocopy_threshold": 0, 00:18:40.691 "tls_version": 0, 00:18:40.691 "enable_ktls": false 00:18:40.691 } 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "method": "sock_impl_set_options", 00:18:40.691 "params": { 00:18:40.691 "impl_name": "posix", 00:18:40.691 "recv_buf_size": 2097152, 00:18:40.691 "send_buf_size": 2097152, 00:18:40.691 "enable_recv_pipe": true, 00:18:40.691 "enable_quickack": false, 00:18:40.691 "enable_placement_id": 0, 00:18:40.691 "enable_zerocopy_send_server": true, 00:18:40.691 "enable_zerocopy_send_client": false, 00:18:40.691 "zerocopy_threshold": 0, 00:18:40.691 "tls_version": 0, 00:18:40.691 "enable_ktls": false 00:18:40.691 } 00:18:40.691 } 00:18:40.691 ] 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "subsystem": "vmd", 00:18:40.691 "config": [] 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "subsystem": "accel", 00:18:40.691 "config": [ 00:18:40.691 { 00:18:40.691 "method": "accel_set_options", 00:18:40.691 "params": { 00:18:40.691 "small_cache_size": 128, 00:18:40.691 "large_cache_size": 16, 00:18:40.691 "task_count": 2048, 00:18:40.691 "sequence_count": 2048, 00:18:40.691 "buf_count": 2048 00:18:40.691 } 00:18:40.691 } 00:18:40.691 ] 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "subsystem": "bdev", 00:18:40.691 "config": [ 00:18:40.691 { 00:18:40.691 "method": "bdev_set_options", 00:18:40.691 "params": { 00:18:40.691 "bdev_io_pool_size": 65535, 00:18:40.691 "bdev_io_cache_size": 256, 00:18:40.691 
"bdev_auto_examine": true, 00:18:40.691 "iobuf_small_cache_size": 128, 00:18:40.691 "iobuf_large_cache_size": 16 00:18:40.691 } 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "method": "bdev_raid_set_options", 00:18:40.691 "params": { 00:18:40.691 "process_window_size_kb": 1024, 00:18:40.691 "process_max_bandwidth_mb_sec": 0 00:18:40.691 } 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "method": "bdev_iscsi_set_options", 00:18:40.691 "params": { 00:18:40.691 "timeout_sec": 30 00:18:40.691 } 00:18:40.691 }, 00:18:40.691 { 00:18:40.691 "method": "bdev_nvme_set_options", 00:18:40.691 "params": { 00:18:40.691 "action_on_timeout": "none", 00:18:40.691 "timeout_us": 0, 00:18:40.691 "timeout_admin_us": 0, 00:18:40.691 "keep_alive_timeout_ms": 10000, 00:18:40.692 "arbitration_burst": 0, 00:18:40.692 "low_priority_weight": 0, 00:18:40.692 "medium_priority_weight": 0, 00:18:40.692 "high_priority_weight": 0, 00:18:40.692 "nvme_adminq_poll_period_us": 10000, 00:18:40.692 "nvme_ioq_poll_period_us": 0, 00:18:40.692 "io_queue_requests": 512, 00:18:40.692 "delay_cmd_submit": true, 00:18:40.692 "transport_retry_count": 4, 00:18:40.692 "bdev_retry_count": 3, 00:18:40.692 "transport_ack_timeout": 0, 00:18:40.692 "ctrlr_loss_timeout_sec": 0, 00:18:40.692 "reconnect_delay_sec": 0, 00:18:40.692 "fast_io_fail_timeout_sec": 0, 00:18:40.692 "disable_auto_failback": false, 00:18:40.692 "generate_uuids": false, 00:18:40.692 "transport_tos": 0, 00:18:40.692 "nvme_error_stat": false, 00:18:40.692 "rdma_srq_size": 0, 00:18:40.692 "io_path_stat": false, 00:18:40.692 "allow_accel_sequence": false, 00:18:40.692 "rdma_max_cq_size": 0, 00:18:40.692 "rdma_cm_event_timeout_ms": 0, 00:18:40.692 "dhchap_digests": [ 00:18:40.692 "sha256", 00:18:40.692 "sha384", 00:18:40.692 "sha512" 00:18:40.692 ], 00:18:40.692 "dhchap_dhgroups": [ 00:18:40.692 "null", 00:18:40.692 "ffdhe2048", 00:18:40.692 "ffdhe3072", 00:18:40.692 "ffdhe4096", 00:18:40.692 "ffdhe6144", 00:18:40.692 "ffdhe8192" 00:18:40.692 ] 00:18:40.692 } 00:18:40.692 }, 00:18:40.692 { 00:18:40.692 "method": "bdev_nvme_attach_controller", 00:18:40.692 "params": { 00:18:40.692 "name": "TLSTEST", 00:18:40.692 "trtype": "TCP", 00:18:40.692 "adrfam": "IPv4", 00:18:40.692 "traddr": "10.0.0.2", 00:18:40.692 "trsvcid": "4420", 00:18:40.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:40.692 "prchk_reftag": false, 00:18:40.692 "prchk_guard": false, 00:18:40.692 "ctrlr_loss_timeout_sec": 0, 00:18:40.692 "reconnect_delay_sec": 0, 00:18:40.692 "fast_io_fail_timeout_sec": 0, 00:18:40.692 "psk": "/tmp/tmp.lzsIFDHYc9", 00:18:40.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:40.692 "hdgst": false, 00:18:40.692 "ddgst": false 00:18:40.692 } 00:18:40.692 }, 00:18:40.692 { 00:18:40.692 "method": "bdev_nvme_set_hotplug", 00:18:40.692 "params": { 00:18:40.692 "period_us": 100000, 00:18:40.692 "enable": false 00:18:40.692 } 00:18:40.692 }, 00:18:40.692 { 00:18:40.692 "method": "bdev_wait_for_examine" 00:18:40.692 } 00:18:40.692 ] 00:18:40.692 }, 00:18:40.692 { 00:18:40.692 "subsystem": "nbd", 00:18:40.692 "config": [] 00:18:40.692 } 00:18:40.692 ] 00:18:40.692 }' 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1198657 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1198657 ']' 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1198657 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 
00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1198657 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1198657' 00:18:40.692 killing process with pid 1198657 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1198657 00:18:40.692 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.692 00:18:40.692 Latency(us) 00:18:40.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.692 =================================================================================================================== 00:18:40.692 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:40.692 [2024-07-24 19:47:57.801435] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:40.692 19:47:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1198657 00:18:40.692 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1198378 00:18:40.692 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1198378 ']' 00:18:40.692 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1198378 00:18:40.692 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:40.692 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:40.692 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1198378 00:18:40.950 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:18:40.950 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:18:40.950 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1198378' 00:18:40.950 killing process with pid 1198378 00:18:40.950 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1198378 00:18:40.950 [2024-07-24 19:47:58.092364] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:40.950 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1198378 00:18:41.209 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:41.209 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:41.209 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:41.209 "subsystems": [ 00:18:41.209 { 00:18:41.209 "subsystem": "keyring", 00:18:41.209 "config": [] 00:18:41.209 }, 00:18:41.209 { 00:18:41.209 "subsystem": "iobuf", 00:18:41.209 "config": [ 00:18:41.209 { 00:18:41.209 "method": "iobuf_set_options", 
00:18:41.209 "params": { 00:18:41.209 "small_pool_count": 8192, 00:18:41.209 "large_pool_count": 1024, 00:18:41.209 "small_bufsize": 8192, 00:18:41.209 "large_bufsize": 135168 00:18:41.209 } 00:18:41.209 } 00:18:41.209 ] 00:18:41.209 }, 00:18:41.209 { 00:18:41.209 "subsystem": "sock", 00:18:41.209 "config": [ 00:18:41.209 { 00:18:41.209 "method": "sock_set_default_impl", 00:18:41.209 "params": { 00:18:41.209 "impl_name": "posix" 00:18:41.209 } 00:18:41.209 }, 00:18:41.209 { 00:18:41.209 "method": "sock_impl_set_options", 00:18:41.209 "params": { 00:18:41.209 "impl_name": "ssl", 00:18:41.209 "recv_buf_size": 4096, 00:18:41.209 "send_buf_size": 4096, 00:18:41.209 "enable_recv_pipe": true, 00:18:41.209 "enable_quickack": false, 00:18:41.209 "enable_placement_id": 0, 00:18:41.209 "enable_zerocopy_send_server": true, 00:18:41.209 "enable_zerocopy_send_client": false, 00:18:41.209 "zerocopy_threshold": 0, 00:18:41.209 "tls_version": 0, 00:18:41.209 "enable_ktls": false 00:18:41.209 } 00:18:41.209 }, 00:18:41.209 { 00:18:41.209 "method": "sock_impl_set_options", 00:18:41.210 "params": { 00:18:41.210 "impl_name": "posix", 00:18:41.210 "recv_buf_size": 2097152, 00:18:41.210 "send_buf_size": 2097152, 00:18:41.210 "enable_recv_pipe": true, 00:18:41.210 "enable_quickack": false, 00:18:41.210 "enable_placement_id": 0, 00:18:41.210 "enable_zerocopy_send_server": true, 00:18:41.210 "enable_zerocopy_send_client": false, 00:18:41.210 "zerocopy_threshold": 0, 00:18:41.210 "tls_version": 0, 00:18:41.210 "enable_ktls": false 00:18:41.210 } 00:18:41.210 } 00:18:41.210 ] 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "subsystem": "vmd", 00:18:41.210 "config": [] 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "subsystem": "accel", 00:18:41.210 "config": [ 00:18:41.210 { 00:18:41.210 "method": "accel_set_options", 00:18:41.210 "params": { 00:18:41.210 "small_cache_size": 128, 00:18:41.210 "large_cache_size": 16, 00:18:41.210 "task_count": 2048, 00:18:41.210 "sequence_count": 2048, 00:18:41.210 "buf_count": 2048 00:18:41.210 } 00:18:41.210 } 00:18:41.210 ] 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "subsystem": "bdev", 00:18:41.210 "config": [ 00:18:41.210 { 00:18:41.210 "method": "bdev_set_options", 00:18:41.210 "params": { 00:18:41.210 "bdev_io_pool_size": 65535, 00:18:41.210 "bdev_io_cache_size": 256, 00:18:41.210 "bdev_auto_examine": true, 00:18:41.210 "iobuf_small_cache_size": 128, 00:18:41.210 "iobuf_large_cache_size": 16 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "bdev_raid_set_options", 00:18:41.210 "params": { 00:18:41.210 "process_window_size_kb": 1024, 00:18:41.210 "process_max_bandwidth_mb_sec": 0 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "bdev_iscsi_set_options", 00:18:41.210 "params": { 00:18:41.210 "timeout_sec": 30 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "bdev_nvme_set_options", 00:18:41.210 "params": { 00:18:41.210 "action_on_timeout": "none", 00:18:41.210 "timeout_us": 0, 00:18:41.210 "timeout_admin_us": 0, 00:18:41.210 "keep_alive_timeout_ms": 10000, 00:18:41.210 "arbitration_burst": 0, 00:18:41.210 "low_priority_weight": 0, 00:18:41.210 "medium_priority_weight": 0, 00:18:41.210 "high_priority_weight": 0, 00:18:41.210 "nvme_adminq_poll_period_us": 10000, 00:18:41.210 "nvme_ioq_poll_period_us": 0, 00:18:41.210 "io_queue_requests": 0, 00:18:41.210 "delay_cmd_submit": true, 00:18:41.210 "transport_retry_count": 4, 00:18:41.210 "bdev_retry_count": 3, 00:18:41.210 "transport_ack_timeout": 0, 00:18:41.210 
"ctrlr_loss_timeout_sec": 0, 00:18:41.210 "reconnect_delay_sec": 0, 00:18:41.210 "fast_io_fail_timeout_sec": 0, 00:18:41.210 "disable_auto_failback": false, 00:18:41.210 "generate_uuids": false, 00:18:41.210 "transport_tos": 0, 00:18:41.210 "nvme_error_stat": false, 00:18:41.210 "rdma_srq_size": 0, 00:18:41.210 "io_path_stat": false, 00:18:41.210 "allow_accel_sequence": false, 00:18:41.210 "rdma_max_cq_size": 0, 00:18:41.210 "rdma_cm_event_timeout_ms": 0, 00:18:41.210 "dhchap_digests": [ 00:18:41.210 "sha256", 00:18:41.210 "sha384", 00:18:41.210 "sha512" 00:18:41.210 ], 00:18:41.210 "dhchap_dhgroups": [ 00:18:41.210 "null", 00:18:41.210 "ffdhe2048", 00:18:41.210 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:18:41.210 "ffdhe3072", 00:18:41.210 "ffdhe4096", 00:18:41.210 "ffdhe6144", 00:18:41.210 "ffdhe8192" 00:18:41.210 ] 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "bdev_nvme_set_hotplug", 00:18:41.210 "params": { 00:18:41.210 "period_us": 100000, 00:18:41.210 "enable": false 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "bdev_malloc_create", 00:18:41.210 "params": { 00:18:41.210 "name": "malloc0", 00:18:41.210 "num_blocks": 8192, 00:18:41.210 "block_size": 4096, 00:18:41.210 "physical_block_size": 4096, 00:18:41.210 "uuid": "c695c10e-94b0-4cd4-8be7-42eadb75646d", 00:18:41.210 "optimal_io_boundary": 0, 00:18:41.210 "md_size": 0, 00:18:41.210 "dif_type": 0, 00:18:41.210 "dif_is_head_of_md": false, 00:18:41.210 "dif_pi_format": 0 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "bdev_wait_for_examine" 00:18:41.210 } 00:18:41.210 ] 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "subsystem": "nbd", 00:18:41.210 "config": [] 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "subsystem": "scheduler", 00:18:41.210 "config": [ 00:18:41.210 { 00:18:41.210 "method": "framework_set_scheduler", 00:18:41.210 "params": { 00:18:41.210 "name": "static" 00:18:41.210 } 00:18:41.210 } 00:18:41.210 ] 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "subsystem": "nvmf", 00:18:41.210 "config": [ 00:18:41.210 { 00:18:41.210 "method": "nvmf_set_config", 00:18:41.210 "params": { 00:18:41.210 "discovery_filter": "match_any", 00:18:41.210 "admin_cmd_passthru": { 00:18:41.210 "identify_ctrlr": false 00:18:41.210 } 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "nvmf_set_max_subsystems", 00:18:41.210 "params": { 00:18:41.210 "max_subsystems": 1024 00:18:41.210 } 00:18:41.210 }, 00:18:41.210 { 00:18:41.210 "method": "nvmf_set_crdt", 00:18:41.211 "params": { 00:18:41.211 "crdt1": 0, 00:18:41.211 "crdt2": 0, 00:18:41.211 "crdt3": 0 00:18:41.211 } 00:18:41.211 }, 00:18:41.211 { 00:18:41.211 "method": "nvmf_create_transport", 00:18:41.211 "params": { 00:18:41.211 "trtype": "TCP", 00:18:41.211 "max_queue_depth": 128, 00:18:41.211 "max_io_qpairs_per_ctrlr": 127, 00:18:41.211 "in_capsule_data_size": 4096, 00:18:41.211 "max_io_size": 131072, 00:18:41.211 "io_unit_size": 131072, 00:18:41.211 "max_aq_depth": 128, 00:18:41.211 "num_shared_buffers": 511, 00:18:41.211 "buf_cache_size": 4294967295, 00:18:41.211 "dif_insert_or_strip": false, 00:18:41.211 "zcopy": false, 00:18:41.211 "c2h_success": false, 00:18:41.211 "sock_priority": 0, 00:18:41.211 "abort_timeout_sec": 1, 00:18:41.211 "ack_timeout": 0, 00:18:41.211 "data_wr_pool_size": 0 00:18:41.211 } 00:18:41.211 }, 00:18:41.211 { 00:18:41.211 "method": "nvmf_create_subsystem", 00:18:41.211 "params": { 00:18:41.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:41.211 "allow_any_host": false, 00:18:41.211 "serial_number": "SPDK00000000000001", 00:18:41.211 "model_number": "SPDK bdev Controller", 00:18:41.211 "max_namespaces": 10, 00:18:41.211 "min_cntlid": 1, 00:18:41.211 "max_cntlid": 65519, 00:18:41.211 "ana_reporting": false 00:18:41.211 } 00:18:41.211 }, 00:18:41.211 { 00:18:41.211 "method": "nvmf_subsystem_add_host", 00:18:41.211 "params": { 00:18:41.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.211 "host": "nqn.2016-06.io.spdk:host1", 00:18:41.211 "psk": "/tmp/tmp.lzsIFDHYc9" 00:18:41.211 } 00:18:41.211 }, 00:18:41.211 { 00:18:41.211 "method": "nvmf_subsystem_add_ns", 00:18:41.211 "params": { 00:18:41.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.211 "namespace": { 00:18:41.211 "nsid": 1, 00:18:41.211 "bdev_name": "malloc0", 00:18:41.211 "nguid": "C695C10E94B04CD48BE742EADB75646D", 00:18:41.211 "uuid": "c695c10e-94b0-4cd4-8be7-42eadb75646d", 00:18:41.211 "no_auto_visible": false 00:18:41.211 } 00:18:41.211 } 00:18:41.211 }, 00:18:41.211 { 00:18:41.211 "method": "nvmf_subsystem_add_listener", 00:18:41.211 "params": { 00:18:41.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:41.211 "listen_address": { 00:18:41.211 "trtype": "TCP", 00:18:41.211 "adrfam": "IPv4", 00:18:41.211 "traddr": "10.0.0.2", 00:18:41.211 "trsvcid": "4420" 00:18:41.211 }, 00:18:41.211 "secure_channel": true 00:18:41.211 } 00:18:41.211 } 00:18:41.211 ] 00:18:41.211 } 00:18:41.211 ] 00:18:41.211 }' 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1198929 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1198929 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1198929 ']' 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:41.211 19:47:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.211 [2024-07-24 19:47:58.435007] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:41.211 [2024-07-24 19:47:58.435102] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:41.211 EAL: No free 2048 kB hugepages reported on node 1 00:18:41.211 [2024-07-24 19:47:58.504866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.469 [2024-07-24 19:47:58.619373] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:41.469 [2024-07-24 19:47:58.619435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:41.469 [2024-07-24 19:47:58.619464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:41.469 [2024-07-24 19:47:58.619477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:41.469 [2024-07-24 19:47:58.619489] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:41.469 [2024-07-24 19:47:58.619578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.730 [2024-07-24 19:47:58.849630] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.730 [2024-07-24 19:47:58.877963] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:41.730 [2024-07-24 19:47:58.894035] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.730 [2024-07-24 19:47:58.894296] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.988 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:41.988 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:41.988 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:41.988 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:18:41.988 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1198980 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1198980 /var/tmp/bdevperf.sock 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1198980 ']' 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
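From target/tls.sh@203 onward the suite switches to configuration round-tripping: the tgtconf and bdevperfconf JSON blobs captured earlier with save_config are echoed back as startup configs, so the freshly started target (and, below, bdevperf) come up with the TLS subsystem already provisioned instead of being rebuilt RPC by RPC. A sketch of the pattern, assuming bash process substitution is what makes the echoed JSON appear as /dev/fd/62 and /dev/fd/63, with the launch commands simplified from this run's full invocations:

    # Capture the live configuration of the running apps.
    tgtconf=$(./scripts/rpc.py save_config)
    bdevperfconf=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # Replay at startup; <(...) shows up inside the app as /dev/fd/NN.
    ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")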
00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:42.247 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:42.247 "subsystems": [ 00:18:42.247 { 00:18:42.247 "subsystem": "keyring", 00:18:42.247 "config": [] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "iobuf", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "iobuf_set_options", 00:18:42.247 "params": { 00:18:42.247 "small_pool_count": 8192, 00:18:42.247 "large_pool_count": 1024, 00:18:42.247 "small_bufsize": 8192, 00:18:42.247 "large_bufsize": 135168 00:18:42.247 } 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "sock", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "sock_set_default_impl", 00:18:42.247 "params": { 00:18:42.247 "impl_name": "posix" 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "sock_impl_set_options", 00:18:42.247 "params": { 00:18:42.247 "impl_name": "ssl", 00:18:42.247 "recv_buf_size": 4096, 00:18:42.247 "send_buf_size": 4096, 00:18:42.247 "enable_recv_pipe": true, 00:18:42.247 "enable_quickack": false, 00:18:42.247 "enable_placement_id": 0, 00:18:42.247 "enable_zerocopy_send_server": true, 00:18:42.247 "enable_zerocopy_send_client": false, 00:18:42.247 "zerocopy_threshold": 0, 00:18:42.247 "tls_version": 0, 00:18:42.247 "enable_ktls": false 00:18:42.247 } 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "method": "sock_impl_set_options", 00:18:42.247 "params": { 00:18:42.247 "impl_name": "posix", 00:18:42.247 "recv_buf_size": 2097152, 00:18:42.247 "send_buf_size": 2097152, 00:18:42.247 "enable_recv_pipe": true, 00:18:42.247 "enable_quickack": false, 00:18:42.247 "enable_placement_id": 0, 00:18:42.247 "enable_zerocopy_send_server": true, 00:18:42.247 "enable_zerocopy_send_client": false, 00:18:42.247 "zerocopy_threshold": 0, 00:18:42.247 "tls_version": 0, 00:18:42.247 "enable_ktls": false 00:18:42.247 } 00:18:42.247 } 00:18:42.247 ] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "vmd", 00:18:42.247 "config": [] 00:18:42.247 }, 00:18:42.247 { 00:18:42.247 "subsystem": "accel", 00:18:42.247 "config": [ 00:18:42.247 { 00:18:42.247 "method": "accel_set_options", 00:18:42.248 "params": { 00:18:42.248 "small_cache_size": 128, 00:18:42.248 "large_cache_size": 16, 00:18:42.248 "task_count": 2048, 00:18:42.248 "sequence_count": 2048, 00:18:42.248 "buf_count": 2048 00:18:42.248 } 00:18:42.248 } 00:18:42.248 ] 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "subsystem": "bdev", 00:18:42.248 "config": [ 00:18:42.248 { 00:18:42.248 "method": "bdev_set_options", 00:18:42.248 "params": { 00:18:42.248 "bdev_io_pool_size": 65535, 00:18:42.248 "bdev_io_cache_size": 256, 00:18:42.248 "bdev_auto_examine": true, 00:18:42.248 "iobuf_small_cache_size": 128, 00:18:42.248 "iobuf_large_cache_size": 16 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "bdev_raid_set_options", 00:18:42.248 "params": { 00:18:42.248 "process_window_size_kb": 1024, 00:18:42.248 "process_max_bandwidth_mb_sec": 0 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "bdev_iscsi_set_options", 00:18:42.248 "params": { 00:18:42.248 "timeout_sec": 30 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "bdev_nvme_set_options", 00:18:42.248 "params": { 00:18:42.248 "action_on_timeout": "none", 00:18:42.248 "timeout_us": 0, 00:18:42.248 "timeout_admin_us": 0, 00:18:42.248 "keep_alive_timeout_ms": 10000, 00:18:42.248 
"arbitration_burst": 0, 00:18:42.248 "low_priority_weight": 0, 00:18:42.248 "medium_priority_weight": 0, 00:18:42.248 "high_priority_weight": 0, 00:18:42.248 "nvme_adminq_poll_period_us": 10000, 00:18:42.248 "nvme_ioq_poll_period_us": 0, 00:18:42.248 "io_queue_requests": 512, 00:18:42.248 "delay_cmd_submit": true, 00:18:42.248 "transport_retry_count": 4, 00:18:42.248 "bdev_retry_count": 3, 00:18:42.248 "transport_ack_timeout": 0, 00:18:42.248 "ctrlr_loss_timeout_sec": 0, 00:18:42.248 "reconnect_delay_sec": 0, 00:18:42.248 "fast_io_fail_timeout_sec": 0, 00:18:42.248 "disable_auto_failback": false, 00:18:42.248 "generate_uuids": false, 00:18:42.248 "transport_tos": 0, 00:18:42.248 "nvme_error_stat": false, 00:18:42.248 "rdma_srq_size": 0, 00:18:42.248 "io_path_stat": false, 00:18:42.248 "allow_accel_sequence": false, 00:18:42.248 "rdma_max_cq_size": 0, 00:18:42.248 "rdma_cm_event_timeout_ms": 0, 00:18:42.248 "dhchap_digests": [ 00:18:42.248 "sha256", 00:18:42.248 "sha384", 00:18:42.248 "sha512" 00:18:42.248 ], 00:18:42.248 "dhchap_dhgroups": [ 00:18:42.248 "null", 00:18:42.248 "ffdhe2048", 00:18:42.248 19:47:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:42.248 "ffdhe3072", 00:18:42.248 "ffdhe4096", 00:18:42.248 "ffdhe6144", 00:18:42.248 "ffdhe8192" 00:18:42.248 ] 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "bdev_nvme_attach_controller", 00:18:42.248 "params": { 00:18:42.248 "name": "TLSTEST", 00:18:42.248 "trtype": "TCP", 00:18:42.248 "adrfam": "IPv4", 00:18:42.248 "traddr": "10.0.0.2", 00:18:42.248 "trsvcid": "4420", 00:18:42.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:42.248 "prchk_reftag": false, 00:18:42.248 "prchk_guard": false, 00:18:42.248 "ctrlr_loss_timeout_sec": 0, 00:18:42.248 "reconnect_delay_sec": 0, 00:18:42.248 "fast_io_fail_timeout_sec": 0, 00:18:42.248 "psk": "/tmp/tmp.lzsIFDHYc9", 00:18:42.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:42.248 "hdgst": false, 00:18:42.248 "ddgst": false 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "bdev_nvme_set_hotplug", 00:18:42.248 "params": { 00:18:42.248 "period_us": 100000, 00:18:42.248 "enable": false 00:18:42.248 } 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "method": "bdev_wait_for_examine" 00:18:42.248 } 00:18:42.248 ] 00:18:42.248 }, 00:18:42.248 { 00:18:42.248 "subsystem": "nbd", 00:18:42.248 "config": [] 00:18:42.248 } 00:18:42.248 ] 00:18:42.248 }' 00:18:42.248 [2024-07-24 19:47:59.427844] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:42.248 [2024-07-24 19:47:59.427932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198980 ] 00:18:42.248 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.248 [2024-07-24 19:47:59.489513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.248 [2024-07-24 19:47:59.594259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.508 [2024-07-24 19:47:59.764106] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.508 [2024-07-24 19:47:59.764262] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:43.076 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:43.076 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:43.076 19:48:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:43.336 Running I/O for 10 seconds... 00:18:53.370 00:18:53.370 Latency(us) 00:18:53.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.370 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:53.370 Verification LBA range: start 0x0 length 0x2000 00:18:53.370 TLSTESTn1 : 10.02 3464.18 13.53 0.00 0.00 36884.67 6359.42 42525.58 00:18:53.370 =================================================================================================================== 00:18:53.370 Total : 3464.18 13.53 0.00 0.00 36884.67 6359.42 42525.58 00:18:53.370 0 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1198980 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1198980 ']' 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1198980 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1198980 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1198980' 00:18:53.370 killing process with pid 1198980 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1198980 00:18:53.370 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.370 00:18:53.370 Latency(us) 00:18:53.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.370 
=================================================================================================================== 00:18:53.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.370 [2024-07-24 19:48:10.587383] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:53.370 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1198980 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1198929 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1198929 ']' 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1198929 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1198929 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1198929' 00:18:53.630 killing process with pid 1198929 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1198929 00:18:53.630 [2024-07-24 19:48:10.878386] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:53.630 19:48:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1198929 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1201034 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1201034 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1201034 ']' 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:53.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
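(Editor's note: the repeated "Waiting for process to start up and listen on UNIX domain socket..." lines above come from the waitforlisten helper in autotest_common.sh. A simplified, illustrative sketch of the loop it runs; the real helper carries more error handling, and rpc_get_methods is the standard liveness probe:)

# Simplified sketch of waitforlisten: poll the RPC socket until the freshly
# started app answers, bailing out early if the process dies.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1   # app died during startup
        if "$rpc" -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.5
    done
    return 1                                      # timed out
}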
00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:53.889 19:48:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.889 [2024-07-24 19:48:11.224823] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:53.889 [2024-07-24 19:48:11.224924] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:53.889 EAL: No free 2048 kB hugepages reported on node 1 00:18:54.149 [2024-07-24 19:48:11.293220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.149 [2024-07-24 19:48:11.405406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:54.149 [2024-07-24 19:48:11.405468] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:54.149 [2024-07-24 19:48:11.405495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:54.149 [2024-07-24 19:48:11.405508] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:54.149 [2024-07-24 19:48:11.405519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:54.149 [2024-07-24 19:48:11.405549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.lzsIFDHYc9 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lzsIFDHYc9 00:18:55.087 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:55.087 [2024-07-24 19:48:12.452966] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.346 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:55.346 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:55.604 [2024-07-24 19:48:12.938251] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.604 [2024-07-24 19:48:12.938513] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.604 19:48:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:55.862 malloc0 00:18:55.862 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:56.128 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lzsIFDHYc9 00:18:56.387 [2024-07-24 19:48:13.687865] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1201330 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1201330 /var/tmp/bdevperf.sock 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1201330 ']' 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:56.387 19:48:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.387 [2024-07-24 19:48:13.752117] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:18:56.387 [2024-07-24 19:48:13.752195] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201330 ] 00:18:56.647 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.647 [2024-07-24 19:48:13.811978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.647 [2024-07-24 19:48:13.919491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.906 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:18:56.906 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:18:56.906 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lzsIFDHYc9 00:18:57.166 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:57.166 [2024-07-24 19:48:14.529550] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:57.426 nvme0n1 00:18:57.426 19:48:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:57.426 Running I/O for 1 seconds... 00:18:58.850 00:18:58.850 Latency(us) 00:18:58.850 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.850 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:58.850 Verification LBA range: start 0x0 length 0x2000 00:18:58.850 nvme0n1 : 1.03 3357.10 13.11 0.00 0.00 37689.95 8009.96 38641.97 00:18:58.850 =================================================================================================================== 00:18:58.850 Total : 3357.10 13.11 0.00 0.00 37689.95 8009.96 38641.97 00:18:58.850 0 00:18:58.850 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1201330 00:18:58.850 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1201330 ']' 00:18:58.850 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1201330 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1201330 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1201330' 00:18:58.851 killing process with pid 1201330 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1201330 00:18:58.851 Received shutdown signal, 
test time was about 1.000000 seconds 00:18:58.851 00:18:58.851 Latency(us) 00:18:58.851 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.851 =================================================================================================================== 00:18:58.851 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.851 19:48:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1201330 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1201034 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1201034 ']' 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1201034 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1201034 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1201034' 00:18:58.851 killing process with pid 1201034 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1201034 00:18:58.851 [2024-07-24 19:48:16.103396] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:58.851 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1201034 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1201649 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1201649 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1201649 ']' 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
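(Editor's note: for reference, the setup_nvmf_tgt sequence traced earlier boils down to six RPCs against the default /var/tmp/spdk.sock. This condensed sketch copies the paths and arguments from the trace; the PSK file /tmp/tmp.lzsIFDHYc9 was generated earlier in the run and its contents are not shown here:)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.lzsIFDHYc9

$rpc nvmf_create_transport -t tcp -o                                  # TCP transport
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                                     # -k enables TLS
$rpc bdev_malloc_create 32 4096 -b malloc0                            # backing bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk "$key"                            # per-host PSK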
00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:18:59.110 19:48:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.110 [2024-07-24 19:48:16.436773] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:18:59.110 [2024-07-24 19:48:16.436880] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.110 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.370 [2024-07-24 19:48:16.504749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.370 [2024-07-24 19:48:16.620884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:59.370 [2024-07-24 19:48:16.620949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.370 [2024-07-24 19:48:16.620974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.370 [2024-07-24 19:48:16.620986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.370 [2024-07-24 19:48:16.620998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.370 [2024-07-24 19:48:16.621033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.308 [2024-07-24 19:48:17.454771] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:00.308 malloc0 00:19:00.308 [2024-07-24 19:48:17.486304] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:00.308 [2024-07-24 19:48:17.498462] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1201796 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1201796 /var/tmp/bdevperf.sock 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:00.308 19:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1201796 ']' 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:00.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:00.308 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:00.308 [2024-07-24 19:48:17.573568] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:19:00.308 [2024-07-24 19:48:17.573663] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201796 ] 00:19:00.308 EAL: No free 2048 kB hugepages reported on node 1 00:19:00.308 [2024-07-24 19:48:17.639272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:00.568 [2024-07-24 19:48:17.754815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.568 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:00.568 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:19:00.568 19:48:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lzsIFDHYc9 00:19:00.826 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:01.084 [2024-07-24 19:48:18.330441] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:01.084 nvme0n1 00:19:01.084 19:48:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:01.343 Running I/O for 1 seconds... 
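(Editor's note: the initiator side traced just above registers the PSK as a keyring key on the bdevperf app and attaches with --psk key0, rather than passing a raw file path — the deprecated spdk_nvme_ctrlr_opts.psk route the earlier run warned about. Condensed, with arguments copied from the trace:)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the PSK file under the name "key0" on the bdevperf RPC socket
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lzsIFDHYc9
# Attach over TLS, referring to the keyring entry by name
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Kick off the verify workload on the attached namespace
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests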
00:19:02.278 00:19:02.278 Latency(us) 00:19:02.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.278 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:02.278 Verification LBA range: start 0x0 length 0x2000 00:19:02.278 nvme0n1 : 1.02 3390.80 13.25 0.00 0.00 37346.70 6602.15 40001.23 00:19:02.278 =================================================================================================================== 00:19:02.278 Total : 3390.80 13.25 0.00 0.00 37346.70 6602.15 40001.23 00:19:02.278 0 00:19:02.278 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:02.278 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:02.278 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.537 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:02.537 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:02.537 "subsystems": [ 00:19:02.537 { 00:19:02.537 "subsystem": "keyring", 00:19:02.537 "config": [ 00:19:02.537 { 00:19:02.537 "method": "keyring_file_add_key", 00:19:02.537 "params": { 00:19:02.537 "name": "key0", 00:19:02.537 "path": "/tmp/tmp.lzsIFDHYc9" 00:19:02.537 } 00:19:02.537 } 00:19:02.537 ] 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "subsystem": "iobuf", 00:19:02.537 "config": [ 00:19:02.537 { 00:19:02.537 "method": "iobuf_set_options", 00:19:02.537 "params": { 00:19:02.537 "small_pool_count": 8192, 00:19:02.537 "large_pool_count": 1024, 00:19:02.537 "small_bufsize": 8192, 00:19:02.537 "large_bufsize": 135168 00:19:02.537 } 00:19:02.537 } 00:19:02.537 ] 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "subsystem": "sock", 00:19:02.537 "config": [ 00:19:02.537 { 00:19:02.537 "method": "sock_set_default_impl", 00:19:02.537 "params": { 00:19:02.537 "impl_name": "posix" 00:19:02.537 } 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "method": "sock_impl_set_options", 00:19:02.537 "params": { 00:19:02.537 "impl_name": "ssl", 00:19:02.537 "recv_buf_size": 4096, 00:19:02.537 "send_buf_size": 4096, 00:19:02.537 "enable_recv_pipe": true, 00:19:02.537 "enable_quickack": false, 00:19:02.537 "enable_placement_id": 0, 00:19:02.537 "enable_zerocopy_send_server": true, 00:19:02.537 "enable_zerocopy_send_client": false, 00:19:02.537 "zerocopy_threshold": 0, 00:19:02.537 "tls_version": 0, 00:19:02.537 "enable_ktls": false 00:19:02.537 } 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "method": "sock_impl_set_options", 00:19:02.537 "params": { 00:19:02.537 "impl_name": "posix", 00:19:02.537 "recv_buf_size": 2097152, 00:19:02.537 "send_buf_size": 2097152, 00:19:02.537 "enable_recv_pipe": true, 00:19:02.537 "enable_quickack": false, 00:19:02.537 "enable_placement_id": 0, 00:19:02.537 "enable_zerocopy_send_server": true, 00:19:02.537 "enable_zerocopy_send_client": false, 00:19:02.537 "zerocopy_threshold": 0, 00:19:02.537 "tls_version": 0, 00:19:02.537 "enable_ktls": false 00:19:02.537 } 00:19:02.537 } 00:19:02.537 ] 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "subsystem": "vmd", 00:19:02.537 "config": [] 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "subsystem": "accel", 00:19:02.537 "config": [ 00:19:02.537 { 00:19:02.537 "method": "accel_set_options", 00:19:02.537 "params": { 00:19:02.537 "small_cache_size": 128, 00:19:02.537 "large_cache_size": 16, 00:19:02.537 "task_count": 2048, 00:19:02.537 "sequence_count": 2048, 00:19:02.537 "buf_count": 
2048 00:19:02.537 } 00:19:02.537 } 00:19:02.537 ] 00:19:02.537 }, 00:19:02.537 { 00:19:02.537 "subsystem": "bdev", 00:19:02.537 "config": [ 00:19:02.537 { 00:19:02.537 "method": "bdev_set_options", 00:19:02.538 "params": { 00:19:02.538 "bdev_io_pool_size": 65535, 00:19:02.538 "bdev_io_cache_size": 256, 00:19:02.538 "bdev_auto_examine": true, 00:19:02.538 "iobuf_small_cache_size": 128, 00:19:02.538 "iobuf_large_cache_size": 16 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "bdev_raid_set_options", 00:19:02.538 "params": { 00:19:02.538 "process_window_size_kb": 1024, 00:19:02.538 "process_max_bandwidth_mb_sec": 0 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "bdev_iscsi_set_options", 00:19:02.538 "params": { 00:19:02.538 "timeout_sec": 30 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "bdev_nvme_set_options", 00:19:02.538 "params": { 00:19:02.538 "action_on_timeout": "none", 00:19:02.538 "timeout_us": 0, 00:19:02.538 "timeout_admin_us": 0, 00:19:02.538 "keep_alive_timeout_ms": 10000, 00:19:02.538 "arbitration_burst": 0, 00:19:02.538 "low_priority_weight": 0, 00:19:02.538 "medium_priority_weight": 0, 00:19:02.538 "high_priority_weight": 0, 00:19:02.538 "nvme_adminq_poll_period_us": 10000, 00:19:02.538 "nvme_ioq_poll_period_us": 0, 00:19:02.538 "io_queue_requests": 0, 00:19:02.538 "delay_cmd_submit": true, 00:19:02.538 "transport_retry_count": 4, 00:19:02.538 "bdev_retry_count": 3, 00:19:02.538 "transport_ack_timeout": 0, 00:19:02.538 "ctrlr_loss_timeout_sec": 0, 00:19:02.538 "reconnect_delay_sec": 0, 00:19:02.538 "fast_io_fail_timeout_sec": 0, 00:19:02.538 "disable_auto_failback": false, 00:19:02.538 "generate_uuids": false, 00:19:02.538 "transport_tos": 0, 00:19:02.538 "nvme_error_stat": false, 00:19:02.538 "rdma_srq_size": 0, 00:19:02.538 "io_path_stat": false, 00:19:02.538 "allow_accel_sequence": false, 00:19:02.538 "rdma_max_cq_size": 0, 00:19:02.538 "rdma_cm_event_timeout_ms": 0, 00:19:02.538 "dhchap_digests": [ 00:19:02.538 "sha256", 00:19:02.538 "sha384", 00:19:02.538 "sha512" 00:19:02.538 ], 00:19:02.538 "dhchap_dhgroups": [ 00:19:02.538 "null", 00:19:02.538 "ffdhe2048", 00:19:02.538 "ffdhe3072", 00:19:02.538 "ffdhe4096", 00:19:02.538 "ffdhe6144", 00:19:02.538 "ffdhe8192" 00:19:02.538 ] 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "bdev_nvme_set_hotplug", 00:19:02.538 "params": { 00:19:02.538 "period_us": 100000, 00:19:02.538 "enable": false 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "bdev_malloc_create", 00:19:02.538 "params": { 00:19:02.538 "name": "malloc0", 00:19:02.538 "num_blocks": 8192, 00:19:02.538 "block_size": 4096, 00:19:02.538 "physical_block_size": 4096, 00:19:02.538 "uuid": "c91c8918-46fc-46d9-9f2f-4f2eca6fe48f", 00:19:02.538 "optimal_io_boundary": 0, 00:19:02.538 "md_size": 0, 00:19:02.538 "dif_type": 0, 00:19:02.538 "dif_is_head_of_md": false, 00:19:02.538 "dif_pi_format": 0 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "bdev_wait_for_examine" 00:19:02.538 } 00:19:02.538 ] 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "subsystem": "nbd", 00:19:02.538 "config": [] 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "subsystem": "scheduler", 00:19:02.538 "config": [ 00:19:02.538 { 00:19:02.538 "method": "framework_set_scheduler", 00:19:02.538 "params": { 00:19:02.538 "name": "static" 00:19:02.538 } 00:19:02.538 } 00:19:02.538 ] 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "subsystem": "nvmf", 00:19:02.538 "config": [ 00:19:02.538 { 00:19:02.538 
"method": "nvmf_set_config", 00:19:02.538 "params": { 00:19:02.538 "discovery_filter": "match_any", 00:19:02.538 "admin_cmd_passthru": { 00:19:02.538 "identify_ctrlr": false 00:19:02.538 } 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_set_max_subsystems", 00:19:02.538 "params": { 00:19:02.538 "max_subsystems": 1024 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_set_crdt", 00:19:02.538 "params": { 00:19:02.538 "crdt1": 0, 00:19:02.538 "crdt2": 0, 00:19:02.538 "crdt3": 0 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_create_transport", 00:19:02.538 "params": { 00:19:02.538 "trtype": "TCP", 00:19:02.538 "max_queue_depth": 128, 00:19:02.538 "max_io_qpairs_per_ctrlr": 127, 00:19:02.538 "in_capsule_data_size": 4096, 00:19:02.538 "max_io_size": 131072, 00:19:02.538 "io_unit_size": 131072, 00:19:02.538 "max_aq_depth": 128, 00:19:02.538 "num_shared_buffers": 511, 00:19:02.538 "buf_cache_size": 4294967295, 00:19:02.538 "dif_insert_or_strip": false, 00:19:02.538 "zcopy": false, 00:19:02.538 "c2h_success": false, 00:19:02.538 "sock_priority": 0, 00:19:02.538 "abort_timeout_sec": 1, 00:19:02.538 "ack_timeout": 0, 00:19:02.538 "data_wr_pool_size": 0 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_create_subsystem", 00:19:02.538 "params": { 00:19:02.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.538 "allow_any_host": false, 00:19:02.538 "serial_number": "00000000000000000000", 00:19:02.538 "model_number": "SPDK bdev Controller", 00:19:02.538 "max_namespaces": 32, 00:19:02.538 "min_cntlid": 1, 00:19:02.538 "max_cntlid": 65519, 00:19:02.538 "ana_reporting": false 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_subsystem_add_host", 00:19:02.538 "params": { 00:19:02.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.538 "host": "nqn.2016-06.io.spdk:host1", 00:19:02.538 "psk": "key0" 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_subsystem_add_ns", 00:19:02.538 "params": { 00:19:02.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.538 "namespace": { 00:19:02.538 "nsid": 1, 00:19:02.538 "bdev_name": "malloc0", 00:19:02.538 "nguid": "C91C891846FC46D99F2F4F2ECA6FE48F", 00:19:02.538 "uuid": "c91c8918-46fc-46d9-9f2f-4f2eca6fe48f", 00:19:02.538 "no_auto_visible": false 00:19:02.538 } 00:19:02.538 } 00:19:02.538 }, 00:19:02.538 { 00:19:02.538 "method": "nvmf_subsystem_add_listener", 00:19:02.538 "params": { 00:19:02.538 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.538 "listen_address": { 00:19:02.538 "trtype": "TCP", 00:19:02.538 "adrfam": "IPv4", 00:19:02.538 "traddr": "10.0.0.2", 00:19:02.538 "trsvcid": "4420" 00:19:02.538 }, 00:19:02.538 "secure_channel": false, 00:19:02.538 "sock_impl": "ssl" 00:19:02.538 } 00:19:02.538 } 00:19:02.538 ] 00:19:02.538 } 00:19:02.538 ] 00:19:02.538 }' 00:19:02.538 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:02.799 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:02.799 "subsystems": [ 00:19:02.799 { 00:19:02.799 "subsystem": "keyring", 00:19:02.799 "config": [ 00:19:02.799 { 00:19:02.799 "method": "keyring_file_add_key", 00:19:02.799 "params": { 00:19:02.800 "name": "key0", 00:19:02.800 "path": "/tmp/tmp.lzsIFDHYc9" 00:19:02.800 } 00:19:02.800 } 00:19:02.800 ] 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "subsystem": "iobuf", 00:19:02.800 
"config": [ 00:19:02.800 { 00:19:02.800 "method": "iobuf_set_options", 00:19:02.800 "params": { 00:19:02.800 "small_pool_count": 8192, 00:19:02.800 "large_pool_count": 1024, 00:19:02.800 "small_bufsize": 8192, 00:19:02.800 "large_bufsize": 135168 00:19:02.800 } 00:19:02.800 } 00:19:02.800 ] 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "subsystem": "sock", 00:19:02.800 "config": [ 00:19:02.800 { 00:19:02.800 "method": "sock_set_default_impl", 00:19:02.800 "params": { 00:19:02.800 "impl_name": "posix" 00:19:02.800 } 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "method": "sock_impl_set_options", 00:19:02.800 "params": { 00:19:02.800 "impl_name": "ssl", 00:19:02.800 "recv_buf_size": 4096, 00:19:02.800 "send_buf_size": 4096, 00:19:02.800 "enable_recv_pipe": true, 00:19:02.800 "enable_quickack": false, 00:19:02.800 "enable_placement_id": 0, 00:19:02.800 "enable_zerocopy_send_server": true, 00:19:02.800 "enable_zerocopy_send_client": false, 00:19:02.800 "zerocopy_threshold": 0, 00:19:02.800 "tls_version": 0, 00:19:02.800 "enable_ktls": false 00:19:02.800 } 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "method": "sock_impl_set_options", 00:19:02.800 "params": { 00:19:02.800 "impl_name": "posix", 00:19:02.800 "recv_buf_size": 2097152, 00:19:02.800 "send_buf_size": 2097152, 00:19:02.800 "enable_recv_pipe": true, 00:19:02.800 "enable_quickack": false, 00:19:02.800 "enable_placement_id": 0, 00:19:02.800 "enable_zerocopy_send_server": true, 00:19:02.800 "enable_zerocopy_send_client": false, 00:19:02.800 "zerocopy_threshold": 0, 00:19:02.800 "tls_version": 0, 00:19:02.800 "enable_ktls": false 00:19:02.800 } 00:19:02.800 } 00:19:02.800 ] 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "subsystem": "vmd", 00:19:02.800 "config": [] 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "subsystem": "accel", 00:19:02.800 "config": [ 00:19:02.800 { 00:19:02.800 "method": "accel_set_options", 00:19:02.800 "params": { 00:19:02.800 "small_cache_size": 128, 00:19:02.800 "large_cache_size": 16, 00:19:02.800 "task_count": 2048, 00:19:02.800 "sequence_count": 2048, 00:19:02.800 "buf_count": 2048 00:19:02.800 } 00:19:02.800 } 00:19:02.800 ] 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "subsystem": "bdev", 00:19:02.800 "config": [ 00:19:02.800 { 00:19:02.800 "method": "bdev_set_options", 00:19:02.800 "params": { 00:19:02.800 "bdev_io_pool_size": 65535, 00:19:02.800 "bdev_io_cache_size": 256, 00:19:02.800 "bdev_auto_examine": true, 00:19:02.800 "iobuf_small_cache_size": 128, 00:19:02.800 "iobuf_large_cache_size": 16 00:19:02.800 } 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "method": "bdev_raid_set_options", 00:19:02.800 "params": { 00:19:02.800 "process_window_size_kb": 1024, 00:19:02.800 "process_max_bandwidth_mb_sec": 0 00:19:02.800 } 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "method": "bdev_iscsi_set_options", 00:19:02.800 "params": { 00:19:02.800 "timeout_sec": 30 00:19:02.800 } 00:19:02.800 }, 00:19:02.800 { 00:19:02.800 "method": "bdev_nvme_set_options", 00:19:02.800 "params": { 00:19:02.800 "action_on_timeout": "none", 00:19:02.800 "timeout_us": 0, 00:19:02.800 "timeout_admin_us": 0, 00:19:02.800 "keep_alive_timeout_ms": 10000, 00:19:02.800 "arbitration_burst": 0, 00:19:02.800 "low_priority_weight": 0, 00:19:02.800 "medium_priority_weight": 0, 00:19:02.800 "high_priority_weight": 0, 00:19:02.800 "nvme_adminq_poll_period_us": 10000, 00:19:02.800 "nvme_ioq_poll_period_us": 0, 00:19:02.800 "io_queue_requests": 512, 00:19:02.800 "delay_cmd_submit": true, 00:19:02.800 "transport_retry_count": 4, 00:19:02.801 "bdev_retry_count": 3, 
00:19:02.801 "transport_ack_timeout": 0, 00:19:02.801 "ctrlr_loss_timeout_sec": 0, 00:19:02.801 "reconnect_delay_sec": 0, 00:19:02.801 "fast_io_fail_timeout_sec": 0, 00:19:02.801 "disable_auto_failback": false, 00:19:02.801 "generate_uuids": false, 00:19:02.801 "transport_tos": 0, 00:19:02.801 "nvme_error_stat": false, 00:19:02.801 "rdma_srq_size": 0, 00:19:02.801 "io_path_stat": false, 00:19:02.801 "allow_accel_sequence": false, 00:19:02.801 "rdma_max_cq_size": 0, 00:19:02.801 "rdma_cm_event_timeout_ms": 0, 00:19:02.801 "dhchap_digests": [ 00:19:02.801 "sha256", 00:19:02.801 "sha384", 00:19:02.801 "sha512" 00:19:02.801 ], 00:19:02.801 "dhchap_dhgroups": [ 00:19:02.801 "null", 00:19:02.801 "ffdhe2048", 00:19:02.801 "ffdhe3072", 00:19:02.801 "ffdhe4096", 00:19:02.801 "ffdhe6144", 00:19:02.801 "ffdhe8192" 00:19:02.801 ] 00:19:02.801 } 00:19:02.801 }, 00:19:02.801 { 00:19:02.801 "method": "bdev_nvme_attach_controller", 00:19:02.801 "params": { 00:19:02.801 "name": "nvme0", 00:19:02.801 "trtype": "TCP", 00:19:02.801 "adrfam": "IPv4", 00:19:02.801 "traddr": "10.0.0.2", 00:19:02.801 "trsvcid": "4420", 00:19:02.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.801 "prchk_reftag": false, 00:19:02.801 "prchk_guard": false, 00:19:02.801 "ctrlr_loss_timeout_sec": 0, 00:19:02.801 "reconnect_delay_sec": 0, 00:19:02.801 "fast_io_fail_timeout_sec": 0, 00:19:02.801 "psk": "key0", 00:19:02.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.801 "hdgst": false, 00:19:02.801 "ddgst": false 00:19:02.801 } 00:19:02.801 }, 00:19:02.801 { 00:19:02.801 "method": "bdev_nvme_set_hotplug", 00:19:02.801 "params": { 00:19:02.801 "period_us": 100000, 00:19:02.801 "enable": false 00:19:02.801 } 00:19:02.801 }, 00:19:02.801 { 00:19:02.801 "method": "bdev_enable_histogram", 00:19:02.801 "params": { 00:19:02.801 "name": "nvme0n1", 00:19:02.801 "enable": true 00:19:02.801 } 00:19:02.801 }, 00:19:02.801 { 00:19:02.801 "method": "bdev_wait_for_examine" 00:19:02.801 } 00:19:02.801 ] 00:19:02.801 }, 00:19:02.801 { 00:19:02.801 "subsystem": "nbd", 00:19:02.801 "config": [] 00:19:02.801 } 00:19:02.801 ] 00:19:02.801 }' 00:19:02.801 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1201796 00:19:02.801 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1201796 ']' 00:19:02.801 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1201796 00:19:02.801 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:19:02.801 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:02.801 19:48:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1201796 00:19:02.801 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:19:02.801 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:19:02.801 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1201796' 00:19:02.801 killing process with pid 1201796 00:19:02.801 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1201796 00:19:02.801 Received shutdown signal, test time was about 1.000000 seconds 00:19:02.801 00:19:02.801 Latency(us) 00:19:02.801 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.801 
=================================================================================================================== 00:19:02.801 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.801 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1201796 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1201649 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1201649 ']' 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1201649 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1201649 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1201649' 00:19:03.062 killing process with pid 1201649 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1201649 00:19:03.062 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1201649 00:19:03.321 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:03.321 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:03.321 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:03.321 "subsystems": [ 00:19:03.321 { 00:19:03.321 "subsystem": "keyring", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "keyring_file_add_key", 00:19:03.321 "params": { 00:19:03.321 "name": "key0", 00:19:03.321 "path": "/tmp/tmp.lzsIFDHYc9" 00:19:03.321 } 00:19:03.321 } 00:19:03.321 ] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "iobuf", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "iobuf_set_options", 00:19:03.321 "params": { 00:19:03.321 "small_pool_count": 8192, 00:19:03.321 "large_pool_count": 1024, 00:19:03.321 "small_bufsize": 8192, 00:19:03.321 "large_bufsize": 135168 00:19:03.321 } 00:19:03.321 } 00:19:03.321 ] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "sock", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "sock_set_default_impl", 00:19:03.321 "params": { 00:19:03.321 "impl_name": "posix" 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "sock_impl_set_options", 00:19:03.321 "params": { 00:19:03.321 "impl_name": "ssl", 00:19:03.321 "recv_buf_size": 4096, 00:19:03.321 "send_buf_size": 4096, 00:19:03.321 "enable_recv_pipe": true, 00:19:03.321 "enable_quickack": false, 00:19:03.321 "enable_placement_id": 0, 00:19:03.321 "enable_zerocopy_send_server": true, 00:19:03.321 "enable_zerocopy_send_client": false, 00:19:03.321 "zerocopy_threshold": 0, 00:19:03.321 "tls_version": 0, 00:19:03.321 "enable_ktls": false 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "sock_impl_set_options", 00:19:03.321 "params": { 00:19:03.321 "impl_name": "posix", 00:19:03.321 "recv_buf_size": 2097152, 
00:19:03.321 "send_buf_size": 2097152, 00:19:03.321 "enable_recv_pipe": true, 00:19:03.321 "enable_quickack": false, 00:19:03.321 "enable_placement_id": 0, 00:19:03.321 "enable_zerocopy_send_server": true, 00:19:03.321 "enable_zerocopy_send_client": false, 00:19:03.321 "zerocopy_threshold": 0, 00:19:03.321 "tls_version": 0, 00:19:03.321 "enable_ktls": false 00:19:03.321 } 00:19:03.321 } 00:19:03.321 ] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "vmd", 00:19:03.321 "config": [] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "accel", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "accel_set_options", 00:19:03.321 "params": { 00:19:03.321 "small_cache_size": 128, 00:19:03.321 "large_cache_size": 16, 00:19:03.321 "task_count": 2048, 00:19:03.321 "sequence_count": 2048, 00:19:03.321 "buf_count": 2048 00:19:03.321 } 00:19:03.321 } 00:19:03.321 ] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "bdev", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "bdev_set_options", 00:19:03.321 "params": { 00:19:03.321 "bdev_io_pool_size": 65535, 00:19:03.321 "bdev_io_cache_size": 256, 00:19:03.321 "bdev_auto_examine": true, 00:19:03.321 "iobuf_small_cache_size": 128, 00:19:03.321 "iobuf_large_cache_size": 16 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "bdev_raid_set_options", 00:19:03.321 "params": { 00:19:03.321 "process_window_size_kb": 1024, 00:19:03.321 "process_max_bandwidth_mb_sec": 0 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "bdev_iscsi_set_options", 00:19:03.321 "params": { 00:19:03.321 "timeout_sec": 30 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "bdev_nvme_set_options", 00:19:03.321 "params": { 00:19:03.321 "action_on_timeout": "none", 00:19:03.321 "timeout_us": 0, 00:19:03.321 "timeout_admin_us": 0, 00:19:03.321 "keep_alive_timeout_ms": 10000, 00:19:03.321 "arbitration_burst": 0, 00:19:03.321 "low_priority_weight": 0, 00:19:03.321 "medium_priority_weight": 0, 00:19:03.321 "high_priority_weight": 0, 00:19:03.321 "nvme_adminq_poll_period_us": 10000, 00:19:03.321 "nvme_ioq_poll_period_us": 0, 00:19:03.321 "io_queue_requests": 0, 00:19:03.321 "delay_cmd_submit": true, 00:19:03.321 "transport_retry_count": 4, 00:19:03.321 "bdev_retry_count": 3, 00:19:03.321 "transport_ack_timeout": 0, 00:19:03.321 "ctrlr_loss_timeout_sec": 0, 00:19:03.321 "reconnect_delay_sec": 0, 00:19:03.321 "fast_io_fail_timeout_sec": 0, 00:19:03.321 "disable_auto_failback": false, 00:19:03.321 "generate_uuids": false, 00:19:03.321 "transport_tos": 0, 00:19:03.321 "nvme_error_stat": false, 00:19:03.321 "rdma_srq_size": 0, 00:19:03.321 "io_path_stat": false, 00:19:03.321 "allow_accel_sequence": false, 00:19:03.321 "rdma_max_cq_size": 0, 00:19:03.321 "rdma_cm_event_timeout_ms": 0, 00:19:03.321 "dhchap_digests": [ 00:19:03.321 "sha256", 00:19:03.321 "sha384", 00:19:03.321 "sha512" 00:19:03.321 ], 00:19:03.321 "dhchap_dhgroups": [ 00:19:03.321 "null", 00:19:03.321 "ffdhe2048", 00:19:03.321 "ffdhe3072", 00:19:03.321 "ffdhe4096", 00:19:03.321 "ffdhe6144", 00:19:03.321 "ffdhe8192" 00:19:03.321 ] 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "bdev_nvme_set_hotplug", 00:19:03.321 "params": { 00:19:03.321 "period_us": 100000, 00:19:03.321 "enable": false 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "bdev_malloc_create", 00:19:03.321 "params": { 00:19:03.321 "name": "malloc0", 00:19:03.321 "num_blocks": 8192, 00:19:03.321 "block_size": 4096, 00:19:03.321 
"physical_block_size": 4096, 00:19:03.321 "uuid": "c91c8918-46fc-46d9-9f2f-4f2eca6fe48f", 00:19:03.321 "optimal_io_boundary": 0, 00:19:03.321 "md_size": 0, 00:19:03.321 "dif_type": 0, 00:19:03.321 "dif_is_head_of_md": false, 00:19:03.321 "dif_pi_format": 0 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "bdev_wait_for_examine" 00:19:03.321 } 00:19:03.321 ] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "nbd", 00:19:03.321 "config": [] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "scheduler", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "framework_set_scheduler", 00:19:03.321 "params": { 00:19:03.321 "name": "static" 00:19:03.321 } 00:19:03.321 } 00:19:03.321 ] 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "subsystem": "nvmf", 00:19:03.321 "config": [ 00:19:03.321 { 00:19:03.321 "method": "nvmf_set_config", 00:19:03.321 "params": { 00:19:03.321 "discovery_filter": "match_any", 00:19:03.321 "admin_cmd_passthru": { 00:19:03.321 "identify_ctrlr": false 00:19:03.321 } 00:19:03.321 } 00:19:03.321 }, 00:19:03.321 { 00:19:03.321 "method": "nvmf_set_max_subsystems", 00:19:03.321 "params": { 00:19:03.322 "max_subsystems": 1024 00:19:03.322 } 00:19:03.322 }, 00:19:03.322 { 00:19:03.322 "method": "nvmf_set_crdt", 00:19:03.322 "params": { 00:19:03.322 "crdt1": 0, 00:19:03.322 "crdt2": 0, 00:19:03.322 "crdt3": 0 00:19:03.322 } 00:19:03.322 }, 00:19:03.322 { 00:19:03.322 "method": "nvmf_create_transport", 00:19:03.322 "params": { 00:19:03.322 "trtype": "TCP", 00:19:03.322 "max_queue_depth": 128, 00:19:03.322 "max_io_qpairs_per_ctrlr": 127, 00:19:03.322 "in_capsule_data_size": 4096, 00:19:03.322 "max_io_size": 131072, 00:19:03.322 "io_unit_size": 131072, 00:19:03.322 "max_aq_depth": 128, 00:19:03.322 "num_shared_buffers": 511, 00:19:03.322 "buf_cache_size": 4294967295, 00:19:03.322 "dif_insert_or_strip": false, 00:19:03.322 "zcopy": false, 00:19:03.322 "c2h_success": false, 00:19:03.322 "sock_priority": 0, 00:19:03.322 "abort_timeout_sec": 1, 00:19:03.322 "ack_timeout": 0, 00:19:03.322 "data_wr_pool_size": 0 00:19:03.322 } 00:19:03.322 }, 00:19:03.322 { 00:19:03.322 "method": "nvmf_create_subsystem", 00:19:03.322 "params": { 00:19:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.322 "allow_any_host": false, 00:19:03.322 "serial_number": "00000000000000000000", 00:19:03.322 "model_number": "SPDK bdev Controller", 00:19:03.322 "max_namespaces": 32, 00:19:03.322 "min_cntlid": 1, 00:19:03.322 "max_cntlid": 65519, 00:19:03.322 "ana_reporting": false 00:19:03.322 } 00:19:03.322 }, 00:19:03.322 { 00:19:03.322 "method": "nvmf_subsystem_add_host", 00:19:03.322 "params": { 00:19:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.322 "host": "nqn.2016-06.io.spdk:host1", 00:19:03.322 "psk": "key0" 00:19:03.322 } 00:19:03.322 }, 00:19:03.322 { 00:19:03.322 "method": "nvmf_subsystem_add_ns", 00:19:03.322 "params": { 00:19:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.322 "namespace": { 00:19:03.322 "nsid": 1, 00:19:03.322 "bdev_name": "malloc0", 00:19:03.322 "nguid": "C91C891846FC46D99F2F4F2ECA6FE48F", 00:19:03.322 "uuid": "c91c8918-46fc-46d9-9f2f-4f2eca6fe48f", 00:19:03.322 "no_auto_visible": false 00:19:03.322 } 00:19:03.322 } 00:19:03.322 }, 00:19:03.322 { 00:19:03.322 "method": "nvmf_subsystem_add_listener", 00:19:03.322 "params": { 00:19:03.322 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:03.322 "listen_address": { 00:19:03.322 "trtype": "TCP", 00:19:03.322 "adrfam": "IPv4", 00:19:03.322 "traddr": "10.0.0.2", 00:19:03.322 "trsvcid": "4420" 
00:19:03.322 }, 00:19:03.322 "secure_channel": false, 00:19:03.322 "sock_impl": "ssl" 00:19:03.322 } 00:19:03.322 } 00:19:03.322 ] 00:19:03.322 } 00:19:03.322 ] 00:19:03.322 }' 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@725 -- # xtrace_disable 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@485 -- # nvmfpid=1202165 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@486 -- # waitforlisten 1202165 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1202165 ']' 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:03.322 19:48:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.322 [2024-07-24 19:48:20.684094] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:19:03.322 [2024-07-24 19:48:20.684197] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:03.580 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.580 [2024-07-24 19:48:20.754370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.580 [2024-07-24 19:48:20.869588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:03.580 [2024-07-24 19:48:20.869647] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:03.580 [2024-07-24 19:48:20.869672] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:03.580 [2024-07-24 19:48:20.869686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:03.580 [2024-07-24 19:48:20.869698] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
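(Editor's note: the JSON blob echoed into "nvmfappstart -c /dev/fd/62" above is the target configuration captured earlier with 'rpc_cmd save_config' into $tgtcfg. A sketch of the capture-and-replay pattern; the exact fd number is simply whatever bash assigns to the process substitution:)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# Capture the live target's full JSON configuration over its RPC socket
tgtcfg=$($rpc save_config)

# Replay it on a fresh start: <(...) shows up as /dev/fd/62 (or similar),
# which nvmf_tgt reads with -c exactly as it would a config file on disk
ip netns exec cvl_0_0_ns_spdk "$tgt" -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
nvmfpid=$!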
00:19:03.580 [2024-07-24 19:48:20.869774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.839 [2024-07-24 19:48:21.109733] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.839 [2024-07-24 19:48:21.153959] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.839 [2024-07-24 19:48:21.154238] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@731 -- # xtrace_disable 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1202319 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1202319 /var/tmp/bdevperf.sock 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@832 -- # '[' -z 1202319 ']' 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
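(Editor's note: the bdevperf instance traced next is driven the same way — its saved configuration $bperfcfg, taken from the previous instance over /var/tmp/bdevperf.sock with save_config, is fed back in as /dev/fd/63. A condensed launch sketch:)

bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# -z keeps bdevperf idle until driven over RPC (perform_tests); -c feeds the
# saved JSON through a process substitution (/dev/fd/63 in the trace below)
"$bperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
    -c <(echo "$bperfcfg") &
bdevperf_pid=$!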
00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:04.406 19:48:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:04.406 "subsystems": [ 00:19:04.406 { 00:19:04.406 "subsystem": "keyring", 00:19:04.406 "config": [ 00:19:04.406 { 00:19:04.406 "method": "keyring_file_add_key", 00:19:04.406 "params": { 00:19:04.406 "name": "key0", 00:19:04.406 "path": "/tmp/tmp.lzsIFDHYc9" 00:19:04.406 } 00:19:04.406 } 00:19:04.406 ] 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "subsystem": "iobuf", 00:19:04.406 "config": [ 00:19:04.406 { 00:19:04.406 "method": "iobuf_set_options", 00:19:04.406 "params": { 00:19:04.406 "small_pool_count": 8192, 00:19:04.406 "large_pool_count": 1024, 00:19:04.406 "small_bufsize": 8192, 00:19:04.406 "large_bufsize": 135168 00:19:04.406 } 00:19:04.406 } 00:19:04.406 ] 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "subsystem": "sock", 00:19:04.406 "config": [ 00:19:04.406 { 00:19:04.406 "method": "sock_set_default_impl", 00:19:04.406 "params": { 00:19:04.406 "impl_name": "posix" 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "sock_impl_set_options", 00:19:04.406 "params": { 00:19:04.406 "impl_name": "ssl", 00:19:04.406 "recv_buf_size": 4096, 00:19:04.406 "send_buf_size": 4096, 00:19:04.406 "enable_recv_pipe": true, 00:19:04.406 "enable_quickack": false, 00:19:04.406 "enable_placement_id": 0, 00:19:04.406 "enable_zerocopy_send_server": true, 00:19:04.406 "enable_zerocopy_send_client": false, 00:19:04.406 "zerocopy_threshold": 0, 00:19:04.406 "tls_version": 0, 00:19:04.406 "enable_ktls": false 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "sock_impl_set_options", 00:19:04.406 "params": { 00:19:04.406 "impl_name": "posix", 00:19:04.406 "recv_buf_size": 2097152, 00:19:04.406 "send_buf_size": 2097152, 00:19:04.406 "enable_recv_pipe": true, 00:19:04.406 "enable_quickack": false, 00:19:04.406 "enable_placement_id": 0, 00:19:04.406 "enable_zerocopy_send_server": true, 00:19:04.406 "enable_zerocopy_send_client": false, 00:19:04.406 "zerocopy_threshold": 0, 00:19:04.406 "tls_version": 0, 00:19:04.406 "enable_ktls": false 00:19:04.406 } 00:19:04.406 } 00:19:04.406 ] 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "subsystem": "vmd", 00:19:04.406 "config": [] 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "subsystem": "accel", 00:19:04.406 "config": [ 00:19:04.406 { 00:19:04.406 "method": "accel_set_options", 00:19:04.406 "params": { 00:19:04.406 "small_cache_size": 128, 00:19:04.406 "large_cache_size": 16, 00:19:04.406 "task_count": 2048, 00:19:04.406 "sequence_count": 2048, 00:19:04.406 "buf_count": 2048 00:19:04.406 } 00:19:04.406 } 00:19:04.406 ] 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "subsystem": "bdev", 00:19:04.406 "config": [ 00:19:04.406 { 00:19:04.406 "method": "bdev_set_options", 00:19:04.406 "params": { 00:19:04.406 "bdev_io_pool_size": 65535, 00:19:04.406 "bdev_io_cache_size": 256, 00:19:04.406 "bdev_auto_examine": true, 00:19:04.406 "iobuf_small_cache_size": 128, 00:19:04.406 "iobuf_large_cache_size": 16 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_raid_set_options", 00:19:04.406 
"params": { 00:19:04.406 "process_window_size_kb": 1024, 00:19:04.406 "process_max_bandwidth_mb_sec": 0 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_iscsi_set_options", 00:19:04.406 "params": { 00:19:04.406 "timeout_sec": 30 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_nvme_set_options", 00:19:04.406 "params": { 00:19:04.406 "action_on_timeout": "none", 00:19:04.406 "timeout_us": 0, 00:19:04.406 "timeout_admin_us": 0, 00:19:04.406 "keep_alive_timeout_ms": 10000, 00:19:04.406 "arbitration_burst": 0, 00:19:04.406 "low_priority_weight": 0, 00:19:04.406 "medium_priority_weight": 0, 00:19:04.406 "high_priority_weight": 0, 00:19:04.406 "nvme_adminq_poll_period_us": 10000, 00:19:04.406 "nvme_ioq_poll_period_us": 0, 00:19:04.406 "io_queue_requests": 512, 00:19:04.406 "delay_cmd_submit": true, 00:19:04.406 "transport_retry_count": 4, 00:19:04.406 "bdev_retry_count": 3, 00:19:04.406 "transport_ack_timeout": 0, 00:19:04.406 "ctrlr_loss_timeout_sec": 0, 00:19:04.406 "reconnect_delay_sec": 0, 00:19:04.406 "fast_io_fail_timeout_sec": 0, 00:19:04.406 "disable_auto_failback": false, 00:19:04.406 "generate_uuids": false, 00:19:04.406 "transport_tos": 0, 00:19:04.406 "nvme_error_stat": false, 00:19:04.406 "rdma_srq_size": 0, 00:19:04.406 "io_path_stat": false, 00:19:04.406 "allow_accel_sequence": false, 00:19:04.406 "rdma_max_cq_size": 0, 00:19:04.406 "rdma_cm_event_timeout_ms": 0, 00:19:04.406 "dhchap_digests": [ 00:19:04.406 "sha256", 00:19:04.406 "sha384", 00:19:04.406 "sha512" 00:19:04.406 ], 00:19:04.406 "dhchap_dhgroups": [ 00:19:04.406 "null", 00:19:04.406 "ffdhe2048", 00:19:04.406 "ffdhe3072", 00:19:04.406 "ffdhe4096", 00:19:04.406 "ffdhe6144", 00:19:04.406 "ffdhe8192" 00:19:04.406 ] 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_nvme_attach_controller", 00:19:04.406 "params": { 00:19:04.406 "name": "nvme0", 00:19:04.406 "trtype": "TCP", 00:19:04.406 "adrfam": "IPv4", 00:19:04.406 "traddr": "10.0.0.2", 00:19:04.406 "trsvcid": "4420", 00:19:04.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.406 "prchk_reftag": false, 00:19:04.406 "prchk_guard": false, 00:19:04.406 "ctrlr_loss_timeout_sec": 0, 00:19:04.406 "reconnect_delay_sec": 0, 00:19:04.406 "fast_io_fail_timeout_sec": 0, 00:19:04.406 "psk": "key0", 00:19:04.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:04.406 "hdgst": false, 00:19:04.406 "ddgst": false 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_nvme_set_hotplug", 00:19:04.406 "params": { 00:19:04.406 "period_us": 100000, 00:19:04.406 "enable": false 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_enable_histogram", 00:19:04.406 "params": { 00:19:04.406 "name": "nvme0n1", 00:19:04.406 "enable": true 00:19:04.406 } 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "method": "bdev_wait_for_examine" 00:19:04.406 } 00:19:04.406 ] 00:19:04.406 }, 00:19:04.406 { 00:19:04.406 "subsystem": "nbd", 00:19:04.406 "config": [] 00:19:04.406 } 00:19:04.406 ] 00:19:04.406 }' 00:19:04.406 [2024-07-24 19:48:21.691355] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:19:04.406 [2024-07-24 19:48:21.691428] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202319 ] 00:19:04.406 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.406 [2024-07-24 19:48:21.753341] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.664 [2024-07-24 19:48:21.869523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.921 [2024-07-24 19:48:22.054265] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.484 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:05.484 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@865 -- # return 0 00:19:05.484 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:05.484 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:05.741 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.741 19:48:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.741 Running I/O for 1 seconds... 00:19:07.116 00:19:07.116 Latency(us) 00:19:07.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.116 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:07.116 Verification LBA range: start 0x0 length 0x2000 00:19:07.116 nvme0n1 : 1.02 3466.81 13.54 0.00 0.00 36534.44 6068.15 39418.69 00:19:07.116 =================================================================================================================== 00:19:07.116 Total : 3466.81 13.54 0.00 0.00 36534.44 6068.15 39418.69 00:19:07.116 0 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # type=--id 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # id=0 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # for n in $shm_files 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:07.116 nvmf_trace.0 00:19:07.116 19:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # return 0 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1202319 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1202319 ']' 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1202319 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1202319 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1202319' 00:19:07.116 killing process with pid 1202319 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1202319 00:19:07.116 Received shutdown signal, test time was about 1.000000 seconds 00:19:07.116 00:19:07.116 Latency(us) 00:19:07.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.116 =================================================================================================================== 00:19:07.116 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1202319 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.116 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.116 rmmod nvme_tcp 00:19:07.116 rmmod nvme_fabrics 00:19:07.374 rmmod nvme_keyring 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # '[' -n 1202165 ']' 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # killprocess 1202165 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@951 -- # '[' -z 1202165 ']' 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # kill -0 1202165 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # uname 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:07.375 19:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1202165 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1202165' 00:19:07.375 killing process with pid 1202165 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # kill 1202165 00:19:07.375 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@975 -- # wait 1202165 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.636 19:48:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.537 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:19:09.537 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.kr4jfbZS8Z /tmp/tmp.sftTT6srO9 /tmp/tmp.lzsIFDHYc9 00:19:09.537 00:19:09.537 real 1m22.872s 00:19:09.537 user 2m14.792s 00:19:09.537 sys 0m24.841s 00:19:09.537 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:09.537 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.537 ************************************ 00:19:09.537 END TEST nvmf_tls 00:19:09.537 ************************************ 00:19:09.537 19:48:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:09.537 19:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:19:09.538 19:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:09.538 19:48:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.796 ************************************ 00:19:09.796 START TEST nvmf_fips 00:19:09.796 ************************************ 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:09.796 * Looking for test storage... 
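[annotation] The teardown traced above (kill -0, ps --no-headers, kill, wait for pids 1202319 and 1202165) all flows through one helper. Reconstructed as a sketch from the visible xtrace, not the verbatim autotest_common.sh source:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0                  # already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # refuse to kill a bare sudo wrapper; reactor_N is the real app
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap and flush shutdown logs
    }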
00:19:09.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:09.796 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:09.797 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:09.797 19:48:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:09.797 19:48:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # local es=0 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@639 -- # local arg=openssl 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@643 -- # type -t openssl 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@645 -- # type -P openssl 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@645 -- # arg=/usr/bin/openssl 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@645 -- # [[ -x /usr/bin/openssl ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # openssl md5 /dev/fd/62 00:19:09.797 Error setting digest 00:19:09.797 00023B2FA07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:09.797 00023B2FA07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # es=1 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.797 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:09.798 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.798 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:19:09.798 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:19:09.798 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # xtrace_disable 00:19:09.798 19:48:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # pci_devs=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -a pci_devs 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # pci_net_devs=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # pci_drivers=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -A pci_drivers 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # net_devs=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # local -ga net_devs 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # e810=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # 
local -ga e810 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # x722=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # local -ga x722 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # mlx=() 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # local -ga mlx 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:11.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:19:11.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:11.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:11.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # is_hw=yes 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:19:11.701 19:48:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:19:11.701 
19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:19:11.701 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.701 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.701 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.701 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:19:11.701 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:19:11.702 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:19:11.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:19:11.962 00:19:11.962 --- 10.0.0.2 ping statistics --- 00:19:11.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.962 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:11.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:19:11.962 00:19:11.962 --- 10.0.0.1 ping statistics --- 00:19:11.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.962 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # return 0 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@725 -- # xtrace_disable 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@485 -- # nvmfpid=1204677 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@486 -- # waitforlisten 1204677 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@832 -- # '[' -z 1204677 ']' 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:11.962 19:48:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.962 [2024-07-24 19:48:29.234336] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:19:11.962 [2024-07-24 19:48:29.234414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.962 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.962 [2024-07-24 19:48:29.296768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.222 [2024-07-24 19:48:29.407901] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.222 [2024-07-24 19:48:29.407955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.222 [2024-07-24 19:48:29.407984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.222 [2024-07-24 19:48:29.407996] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.222 [2024-07-24 19:48:29.408006] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.222 [2024-07-24 19:48:29.408031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.162 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:13.162 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@865 -- # return 0 00:19:13.162 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:13.162 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@731 -- # xtrace_disable 00:19:13.162 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:13.163 [2024-07-24 19:48:30.469843] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.163 [2024-07-24 19:48:30.485832] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:13.163 [2024-07-24 19:48:30.486081] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:13.163 
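[annotation] The PSK provisioning steps from fips.sh@136-139 above, collected in one place (the key value and path are copied from the log; this is a readability sketch, not the verbatim script):

    # An NVMe/TCP interchange-format PSK, written owner-only before the
    # target and the initiator are both pointed at it.
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"    # PSKs must not be group/world readable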
[2024-07-24 19:48:30.518393] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:13.163 malloc0 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1204836 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1204836 /var/tmp/bdevperf.sock 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@832 -- # '[' -z 1204836 ']' 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:13.163 19:48:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:13.422 [2024-07-24 19:48:30.620237] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:19:13.422 [2024-07-24 19:48:30.620331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204836 ] 00:19:13.422 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.422 [2024-07-24 19:48:30.685123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.422 [2024-07-24 19:48:30.797581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.358 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:14.358 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@865 -- # return 0 00:19:14.358 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:14.618 [2024-07-24 19:48:31.777333] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.618 [2024-07-24 19:48:31.777459] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.618 TLSTESTn1 00:19:14.618 19:48:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:14.618 Running I/O for 10 seconds... 
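[annotation] The initiator side of the handshake is the bdev_nvme_attach_controller call traced above, reflowed here with every value copied from the log:

    # Attaches a TLS-protected NVMe/TCP controller; the resulting bdev
    # (TLSTESTn1) is what the 10-second verify workload below runs on.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt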
00:19:24.667 00:19:24.667 Latency(us) 00:19:24.667 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.667 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:24.667 Verification LBA range: start 0x0 length 0x2000 00:19:24.667 TLSTESTn1 : 10.03 3456.20 13.50 0.00 0.00 36959.92 6092.42 42913.94 00:19:24.667 =================================================================================================================== 00:19:24.667 Total : 3456.20 13.50 0.00 0.00 36959.92 6092.42 42913.94 00:19:24.667 0 00:19:24.667 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:24.667 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:24.667 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # type=--id 00:19:24.667 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # id=0 00:19:24.667 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:19:24.667 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # for n in $shm_files 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:24.926 nvmf_trace.0 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # return 0 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1204836 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@951 -- # '[' -z 1204836 ']' 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # kill -0 1204836 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # uname 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1204836 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1204836' 00:19:24.926 killing process with pid 1204836 00:19:24.926 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # kill 1204836 00:19:24.926 Received shutdown signal, test time was about 10.000000 seconds 00:19:24.926 00:19:24.926 Latency(us) 00:19:24.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:24.927 =================================================================================================================== 00:19:24.927 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:24.927 
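[annotation] The process_shm --id 0 step in the cleanup above archives the SPDK trace ring so it can be replayed offline with spdk_trace. Reconstructed from the visible commands as a sketch (output_dir stands in for the spdk/../output path seen in the tar call; the real helper also accepts --pid, per the '[' --id = --pid ']' check in the trace):

    id=0
    output_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output   # assumed expansion of spdk/../output
    shm_files=$(find /dev/shm -name "*.$id" -printf '%f\n')
    for n in $shm_files; do
        tar -C /dev/shm/ -cvzf "$output_dir/${n}_shm.tar.gz" "$n"
    done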
[2024-07-24 19:48:42.132525] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:24.927 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@975 -- # wait 1204836 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:25.185 rmmod nvme_tcp 00:19:25.185 rmmod nvme_fabrics 00:19:25.185 rmmod nvme_keyring 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # '[' -n 1204677 ']' 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # killprocess 1204677 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@951 -- # '[' -z 1204677 ']' 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # kill -0 1204677 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # uname 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1204677 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1204677' 00:19:25.185 killing process with pid 1204677 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # kill 1204677 00:19:25.185 [2024-07-24 19:48:42.482179] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:25.185 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@975 -- # wait 1204677 00:19:25.443 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:25.443 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:25.443 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:25.443 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:25.443 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:25.444 19:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.444 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.444 19:48:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.980 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:27.981 00:19:27.981 real 0m17.897s 00:19:27.981 user 0m24.159s 00:19:27.981 sys 0m5.224s 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:27.981 ************************************ 00:19:27.981 END TEST nvmf_fips 00:19:27.981 ************************************ 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.981 ************************************ 00:19:27.981 START TEST nvmf_control_msg_list 00:19:27.981 ************************************ 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:27.981 * Looking for test storage... 
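
Each sub-test is dispatched through autotest's run_test helper, which produces the START TEST/END TEST banners and the real/user/sys summary printed above. A rough sketch of its shape, a hypothetical simplification of the helper in test/common/autotest_common.sh (the real one also manages xtrace state):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"   # e.g. test/nvmf/target/control_msg_list.sh --transport=tcp
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }
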
00:19:27.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # xtrace_disable 00:19:27.981 19:48:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # pci_devs=() 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@295 -- # local -a pci_devs 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # pci_net_devs=() 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # pci_drivers=() 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # local -A pci_drivers 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # net_devs=() 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # local -ga net_devs 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # e810=() 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # local -ga e810 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@301 -- # x722=() 00:19:29.887 19:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@301 -- # local -ga x722 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # mlx=() 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # local -ga mlx 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:29.887 19:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.887 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.887 Found net devices under 
0000:0a:00.1: cvl_0_1 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # is_hw=yes 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.887 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.888 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.888 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:19:29.888 19:48:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:19:29.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:29.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:19:29.888 00:19:29.888 --- 10.0.0.2 ping statistics --- 00:19:29.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.888 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:19:29.888 00:19:29.888 --- 10.0.0.1 ping statistics --- 00:19:29.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.888 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # return 0 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@725 -- # xtrace_disable 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@485 -- # nvmfpid=1208098 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@486 -- # waitforlisten 1208098 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@832 -- # '[' -z 1208098 ']' 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
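
Condensed, the nvmftestinit bring-up traced above moves the target-side e810 port into its own network namespace so one physical host can act as both target and initiator over real NICs. The commands are replayed from the log (cvl_0_0 and cvl_0_1 are the renamed ice ports found earlier):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse path
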
00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:29.888 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:29.888 [2024-07-24 19:48:47.095845] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:19:29.888 [2024-07-24 19:48:47.095918] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.888 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.888 [2024-07-24 19:48:47.160369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.147 [2024-07-24 19:48:47.267985] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:30.147 [2024-07-24 19:48:47.268033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:30.147 [2024-07-24 19:48:47.268060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:30.147 [2024-07-24 19:48:47.268071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:30.147 [2024-07-24 19:48:47.268080] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:30.147 [2024-07-24 19:48:47.268109] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@865 -- # return 0 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@731 -- # xtrace_disable 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.147 [2024-07-24 19:48:47.415658] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 
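
The target configuration continues just below (malloc bdev, namespace, listener, then three perf initiators); gathered into one sketch using scripts/rpc.py directly (the log drives the same calls through the rpc_cmd helper), with the deliberately tiny --control-msg-num 1 pool being the point of the test:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # The target runs inside the namespace that owns cvl_0_0.
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &

  # TCP transport with small in-capsule data and a single control message buffer.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o \
      --in-capsule-data-size 768 --control-msg-num 1
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" bdev_malloc_create -b Malloc0 32 512
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Three concurrent initiators then contend for that one control message buffer.
  pids=()
  for i in 1 2 3; do
      "$SPDK/build/bin/spdk_nvme_perf" -q 1 -o 4096 -w randread -t 1 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
      pids+=($!)
  done
  wait "${pids[@]}"

In the result tables further down, the third instance's max latency (~65.7 ms against ~16 ms for the other two) is consistent with it queuing behind that single buffer while connecting.
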
00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.147 Malloc0 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:30.147 [2024-07-24 19:48:47.466496] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1208235 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1208236 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1208237 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1208235 00:19:30.147 19:48:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.147 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.147 EAL: No free 2048 kB hugepages 
reported on node 1
00:19:30.147 EAL: No free 2048 kB hugepages reported on node 1
00:19:30.407 [2024-07-24 19:48:47.585349] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:30.407 [2024-07-24 19:48:47.601297] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:30.407 [2024-07-24 19:48:47.616317] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:31.787 Initializing NVMe Controllers
00:19:31.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:31.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:31.787 Initialization complete. Launching workers.
00:19:31.787 ========================================================
00:19:31.787 Latency(us)
00:19:31.787 Device Information : IOPS MiB/s Average min max
00:19:31.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 63.65 0.25 15960.21 15944.98 15978.54
00:19:31.787 ========================================================
00:19:31.787 Total : 63.65 0.25 15960.21 15944.98 15978.54
00:19:31.787
00:19:31.787 Initializing NVMe Controllers
00:19:31.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:31.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:31.787 Initialization complete. Launching workers.
00:19:31.787 ========================================================
00:19:31.787 Latency(us)
00:19:31.787 Device Information : IOPS MiB/s Average min max
00:19:31.787 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 63.71 0.25 15929.11 14964.84 16007.77
00:19:31.787 ========================================================
00:19:31.787 Total : 63.71 0.25 15929.11 14964.84 16007.77
00:19:31.787
00:19:32.046 Initializing NVMe Controllers
00:19:32.046 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:32.046 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:32.046 Initialization complete. Launching workers.
00:19:32.046 ========================================================
00:19:32.046 Latency(us)
00:19:32.046 Device Information : IOPS MiB/s Average min max
00:19:32.046 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 63.89 0.25 16677.35 12150.76 65657.87
00:19:32.046 ========================================================
00:19:32.046 Total : 63.89 0.25 16677.35 12150.76 65657.87
00:19:32.046
00:19:32.046 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1208236
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1208237
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # nvmftestfini
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # nvmfcleanup
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # '[' -n 1208098 ']'
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # killprocess 1208098
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@951 -- # '[' -z 1208098 ']'
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # kill -0 1208098
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # uname
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1208098
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # process_name=reactor_0
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1208098'
killing process with pid 1208098
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # kill 1208098
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@975 -- # wait 1208098
19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.306 19:48:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@1 -- # process_shm --id 0 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@809 -- # type=--id 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@810 -- # id=0 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@821 -- # for n in $shm_files 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:34.845 nvmf_trace.0 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@824 -- # return 0 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@1 -- # nvmftestfini 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:34.845 19:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # '[' -n 1208098 ']' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # killprocess 1208098 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@951 -- # '[' -z 1208098 ']' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # kill -0 1208098 00:19:34.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1208098) - No such process 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # echo 'Process with pid 1208098 is not found' 00:19:34.845 Process with pid 1208098 is not found 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:19:34.845 00:19:34.845 real 0m6.866s 00:19:34.845 user 0m3.241s 00:19:34.845 sys 0m2.079s 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:34.845 ************************************ 00:19:34.845 END TEST nvmf_control_msg_list 00:19:34.845 ************************************ 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.845 ************************************ 00:19:34.845 START TEST nvmf_wait_for_buf 00:19:34.845 ************************************ 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:34.845 * Looking for test storage... 
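
killprocess ran twice in the teardown above: once from nvmftestfini, and again when the EXIT trap re-entered nvmftestfini, at which point pid 1208098 was already gone and the helper simply reported it without failing the run. The shape of such an idempotent helper, a hypothetical simplification of the one in test/common/autotest_common.sh:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      if ! kill -0 "$pid" 2>/dev/null; then            # already exited?
          echo "Process with pid $pid is not found"
          return 0
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true                  # reap it if it was our child
  }
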
00:19:34.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.845 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:19:34.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # xtrace_disable 00:19:34.846 19:48:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@295 -- # pci_devs=() 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@295 -- # local -a pci_devs 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # pci_net_devs=() 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # pci_drivers=() 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # local -A pci_drivers 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # net_devs=() 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # local -ga net_devs 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # e810=() 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # local -ga e810 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@301 -- # x722=() 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@301 -- # local -ga x722 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # mlx=() 00:19:36.749 
19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # local -ga mlx 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.749 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.749 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.750 19:48:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # is_hw=yes 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:19:36.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:19:36.750 00:19:36.750 --- 10.0.0.2 ping statistics --- 00:19:36.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.750 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:19:36.750 00:19:36.750 --- 10.0.0.1 ping statistics --- 00:19:36.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.750 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # return 0 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@725 -- # xtrace_disable 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@485 -- # nvmfpid=1210321 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@486 -- # waitforlisten 1210321 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@832 -- # '[' -z 1210321 ']' 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:36.750 19:48:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:36.750 [2024-07-24 19:48:53.911397] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
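Recapping the nvmf_tcp_init plumbing traced above: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, the other (cvl_0_1) stays in the root namespace as the initiator, and the target binary is then launched inside that namespace. A condensed sketch assembled from the logged commands (the nvmf_tgt path is abbreviated here), not the verbatim function:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

The DPDK/EAL startup output from that last command continues below.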
00:19:36.750 [2024-07-24 19:48:53.911488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.750 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.750 [2024-07-24 19:48:53.976381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.750 [2024-07-24 19:48:54.083424] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.750 [2024-07-24 19:48:54.083478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.750 [2024-07-24 19:48:54.083507] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.750 [2024-07-24 19:48:54.083519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.750 [2024-07-24 19:48:54.083529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.750 [2024-07-24 19:48:54.083555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.750 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:36.750 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@865 -- # return 0 00:19:36.750 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:36.750 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@731 -- # xtrace_disable 00:19:36.750 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.008 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 Malloc0 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 [2024-07-24 19:48:54.248847] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:37.009 [2024-07-24 19:48:54.273054] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:37.009 19:48:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:37.009 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.009 [2024-07-24 19:48:54.351351] 
subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:19:38.916 Initializing NVMe Controllers
00:19:38.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:19:38.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:19:38.916 Initialization complete. Launching workers.
00:19:38.916 ========================================================
00:19:38.916                                                                       Latency(us)
00:19:38.916 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:19:38.916 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0:     103.74      12.97   39940.04    8008.44  109730.86
00:19:38.916 ========================================================
00:19:38.916 Total                                                                :     103.74      12.97   39940.04    8008.44  109730.86
00:19:38.916
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@562 -- # xtrace_disable
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1638
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1638 -eq 0 ]]
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # nvmftestfini
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # nvmfcleanup
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:38.916 rmmod nvme_tcp
00:19:38.916 rmmod nvme_fabrics
00:19:38.916 rmmod nvme_keyring
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # '[' -n 1210321 ']'
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # killprocess 1210321
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@951 -- # '[' -z 1210321 ']'
00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # kill -0 1210321
00:19:38.916 19:48:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # uname 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1210321 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1210321' 00:19:38.916 killing process with pid 1210321 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # kill 1210321 00:19:38.916 19:48:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@975 -- # wait 1210321 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:38.916 19:48:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@1 -- # process_shm --id 0 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@809 -- # type=--id 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@810 -- # id=0 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@821 -- # for n in $shm_files 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:41.489 nvmf_trace.0 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@824 -- # return 0 00:19:41.489 19:48:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@1 -- # nvmftestfini 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # nvmfcleanup 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # '[' -n 1210321 ']' 00:19:41.489 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # killprocess 1210321 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@951 -- # '[' -z 1210321 ']' 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # kill -0 1210321 00:19:41.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1210321) - No such process 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # echo 'Process with pid 1210321 is not found' 00:19:41.490 Process with pid 1210321 is not found 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@282 -- # remove_spdk_ns 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:19:41.490 00:19:41.490 real 0m6.525s 00:19:41.490 user 0m3.059s 00:19:41.490 sys 0m1.863s 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:41.490 ************************************ 00:19:41.490 END TEST nvmf_wait_for_buf 00:19:41.490 ************************************ 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:41.490 
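With the END TEST banner above, the whole nvmf_wait_for_buf case condenses from the rpc_cmd trace into the sketch below (rpc_cmd is the harness wrapper that talks to the target over /var/tmp/spdk.sock, shown here as plain rpc.py calls; flag spellings are copied from the log). The 154-buffer small pool is deliberately too small for 128 KiB reads at queue depth 4, so the pass condition is simply that the transport had to retry buffer allocation rather than fail outright:

  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # starve the pool
  rpc.py framework_start_init
  rpc.py bdev_malloc_create -b Malloc0 32 512              # 32 MiB bdev, 512 B blocks
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
  retry=$(rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry -eq 0 ]] && exit 1    # 1638 retries were observed in this run, so the test passes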
19:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # xtrace_disable 00:19:41.490 19:48:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # pci_devs=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -a pci_devs 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # pci_net_devs=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # pci_drivers=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -A pci_drivers 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@299 -- # net_devs=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@299 -- # local -ga net_devs 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@300 -- # e810=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@300 -- # local -ga e810 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # x722=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # local -ga x722 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # mlx=() 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # local -ga mlx 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:43.394 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:43.394 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:43.394 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:43.394 19:49:00 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:43.394 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable 00:19:43.394 19:49:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.395 ************************************ 00:19:43.395 START TEST nvmf_perf_adq 00:19:43.395 ************************************ 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:43.395 * Looking for test storage... 00:19:43.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:43.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # xtrace_disable 00:19:43.395 19:49:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # pci_devs=() 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -a pci_devs 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # pci_net_devs=() 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # pci_drivers=() 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -A pci_drivers 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # net_devs=() 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # local -ga net_devs 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@300 -- # e810=() 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@300 -- # local -ga e810 00:19:45.298 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # x722=() 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # local -ga x722 
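The array bookkeeping being traced here (for the third time in this stretch of log) is nvmf/common.sh's PCI discovery: PCI functions are bucketed by vendor:device ID out of a pre-built pci_bus_cache map, SPDK_TEST_NVMF_NICS=e810 selects the e810 bucket, and each function's netdev name is read from sysfs. A condensed sketch, assuming pci_bus_cache maps "vendor:device" strings to PCI addresses as the trace suggests (the NIC names in the comments are my reading of the device IDs):

  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})     # E810-C
  e810+=(${pci_bus_cache["$intel:0x159b"]})     # E810-XXV -- the two 0000:0a:00.x ports found here
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x101d"]})   # one of several ConnectX IDs probed
  pci_devs=("${e810[@]}")                       # SPDK_TEST_NVMF_NICS=e810 picks this bucket
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
  done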
00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # mlx=() 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # local -ga mlx 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:45.299 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:45.299 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:45.299 19:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:45.299 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:45.299 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:45.299 19:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:45.299 19:49:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:45.868 19:49:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:47.784 19:49:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@452 -- # prepare_net_devs 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # local -g is_hw=no 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # remove_spdk_ns 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # xtrace_disable 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # pci_devs=() 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -a pci_devs 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # pci_net_devs=() 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # pci_drivers=() 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -A pci_drivers 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # net_devs=() 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # local -ga net_devs 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@300 -- # e810=() 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@300 -- # local -ga e810 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # x722=() 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # local -ga x722 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # mlx=() 
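perf_adq opens by bouncing the ice driver, presumably so both E810 ports come back in a known default queue configuration before ADQ is exercised; the nvmftestinit device-discovery pass that restarts above and continues below is the same one traced twice already. The reload, as traced:

  rmmod ice
  modprobe ice
  sleep 5     # give the ports time to re-register before nvmftestinit probes sysfs again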
00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # local -ga mlx 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:53.058 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:53.058 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:19:53.058 19:49:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.058 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:53.059 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # [[ up == up ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:53.059 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # is_hw=yes 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@422 -- # nvmf_tcp_init 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:19:53.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:53.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:19:53.059 00:19:53.059 --- 10.0.0.2 ping statistics --- 00:19:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.059 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:53.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:53.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:19:53.059 00:19:53.059 --- 10.0.0.1 ping statistics --- 00:19:53.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:53.059 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # return 0 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@725 -- # xtrace_disable 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@485 -- # nvmfpid=1215151 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@486 -- # waitforlisten 1215151 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@832 -- # '[' -z 1215151 ']' 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local max_retries=100 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@841 -- # xtrace_disable 00:19:53.059 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.059 [2024-07-24 19:49:10.242152] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:19:53.059 [2024-07-24 19:49:10.242252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.059 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.059 [2024-07-24 19:49:10.309322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.059 [2024-07-24 19:49:10.422808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.059 [2024-07-24 19:49:10.422878] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.059 [2024-07-24 19:49:10.422906] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.059 [2024-07-24 19:49:10.422918] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.059 [2024-07-24 19:49:10.422927] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.059 [2024-07-24 19:49:10.423011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.059 [2024-07-24 19:49:10.423076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.059 [2024-07-24 19:49:10.423140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:53.059 [2024-07-24 19:49:10.423143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@865 -- # return 0 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@731 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 [2024-07-24 19:49:10.632360] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 Malloc1 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:53.318 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:53.318 [2024-07-24 19:49:10.685551] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.319 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:53.319 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1215189 00:19:53.319 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:53.319 19:49:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:53.577 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:55.480 "tick_rate": 2700000000, 00:19:55.480 "poll_groups": [ 00:19:55.480 { 00:19:55.480 "name": "nvmf_tgt_poll_group_000", 00:19:55.480 "admin_qpairs": 1, 00:19:55.480 "io_qpairs": 1, 00:19:55.480 "current_admin_qpairs": 1, 00:19:55.480 "current_io_qpairs": 1, 00:19:55.480 "pending_bdev_io": 0, 00:19:55.480 "completed_nvme_io": 19926, 00:19:55.480 "transports": [ 00:19:55.480 { 00:19:55.480 "trtype": "TCP" 00:19:55.480 } 00:19:55.480 ] 00:19:55.480 }, 00:19:55.480 { 00:19:55.480 "name": "nvmf_tgt_poll_group_001", 00:19:55.480 "admin_qpairs": 0, 00:19:55.480 "io_qpairs": 1, 00:19:55.480 "current_admin_qpairs": 0, 00:19:55.480 "current_io_qpairs": 1, 00:19:55.480 "pending_bdev_io": 0, 00:19:55.480 "completed_nvme_io": 20201, 00:19:55.480 "transports": [ 00:19:55.480 { 00:19:55.480 "trtype": "TCP" 00:19:55.480 } 00:19:55.480 ] 00:19:55.480 }, 00:19:55.480 { 00:19:55.480 "name": "nvmf_tgt_poll_group_002", 00:19:55.480 "admin_qpairs": 0, 00:19:55.480 "io_qpairs": 1, 00:19:55.480 "current_admin_qpairs": 0, 00:19:55.480 "current_io_qpairs": 1, 00:19:55.480 "pending_bdev_io": 0, 00:19:55.480 "completed_nvme_io": 19376, 00:19:55.480 "transports": [ 00:19:55.480 { 00:19:55.480 "trtype": "TCP" 00:19:55.480 } 00:19:55.480 ] 00:19:55.480 }, 00:19:55.480 { 00:19:55.480 "name": "nvmf_tgt_poll_group_003", 00:19:55.480 "admin_qpairs": 0, 00:19:55.480 "io_qpairs": 1, 00:19:55.480 "current_admin_qpairs": 0, 00:19:55.480 "current_io_qpairs": 1, 00:19:55.480 "pending_bdev_io": 0, 00:19:55.480 "completed_nvme_io": 19274, 00:19:55.480 "transports": [ 00:19:55.480 { 00:19:55.480 "trtype": "TCP" 00:19:55.480 } 00:19:55.480 ] 00:19:55.480 } 00:19:55.480 ] 00:19:55.480 }' 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:55.480 19:49:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1215189 00:20:03.602 Initializing NVMe Controllers 00:20:03.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:03.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:03.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:03.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 
00:20:03.602 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:03.602 Initialization complete. Launching workers. 00:20:03.602 ======================================================== 00:20:03.602 Latency(us) 00:20:03.602 Device Information : IOPS MiB/s Average min max 00:20:03.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10440.90 40.78 6130.49 2512.82 9168.78 00:20:03.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10857.10 42.41 5894.19 2365.27 8190.38 00:20:03.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10749.40 41.99 5955.18 3132.46 8169.89 00:20:03.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10375.80 40.53 6168.12 2611.63 9327.44 00:20:03.602 ======================================================== 00:20:03.602 Total : 42423.19 165.72 6034.80 2365.27 9327.44 00:20:03.602 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # nvmfcleanup 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:03.602 rmmod nvme_tcp 00:20:03.602 rmmod nvme_fabrics 00:20:03.602 rmmod nvme_keyring 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # '[' -n 1215151 ']' 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # killprocess 1215151 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' -z 1215151 ']' 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # kill -0 1215151 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # uname 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1215151 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1215151' 00:20:03.602 killing process with pid 1215151 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # kill 1215151 00:20:03.602 19:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@975 -- # wait 1215151 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@282 -- # remove_spdk_ns 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.172 19:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.079 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:20:06.079 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:06.079 19:49:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:07.031 19:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:08.936 19:49:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # xtrace_disable 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # pci_devs=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -a pci_devs 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # pci_net_devs=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- 
# local -a pci_net_devs 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # pci_drivers=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -A pci_drivers 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # net_devs=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@299 -- # local -ga net_devs 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@300 -- # e810=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@300 -- # local -ga e810 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # x722=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # local -ga x722 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # mlx=() 00:20:14.203 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # local -ga mlx 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # (( 1 == 0 )) 
00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # is_hw=yes 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:20:14.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:14.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:20:14.204 00:20:14.204 --- 10.0.0.2 ping statistics --- 00:20:14.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.204 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:20:14.204 00:20:14.204 --- 10.0.0.1 ping statistics --- 00:20:14.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.204 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # return 0 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:14.204 net.core.busy_poll = 1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:14.204 net.core.busy_read = 1 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:14.204 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:14.205 19:49:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@485 -- # nvmfpid=1217807 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@486 -- # waitforlisten 1217807 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@832 -- # '[' -z 1217807 ']' 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:14.205 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.205 [2024-07-24 19:49:31.443548] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:14.205 [2024-07-24 19:49:31.443646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.205 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.205 [2024-07-24 19:49:31.509173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.463 [2024-07-24 19:49:31.622498] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.463 [2024-07-24 19:49:31.622566] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.463 [2024-07-24 19:49:31.622587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.463 [2024-07-24 19:49:31.622604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.463 [2024-07-24 19:49:31.622617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:14.463 [2024-07-24 19:49:31.622706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.463 [2024-07-24 19:49:31.622773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.463 [2024-07-24 19:49:31.622846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.463 [2024-07-24 19:49:31.622839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@865 -- # return 0 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.463 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.720 [2024-07-24 19:49:31.849067] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 
]] 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.720 Malloc1 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.720 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:14.721 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:14.721 [2024-07-24 19:49:31.902369] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.721 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:14.721 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1217953 00:20:14.721 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:20:14.721 19:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:14.721 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:20:16.630 "tick_rate": 2700000000, 00:20:16.630 "poll_groups": [ 00:20:16.630 { 00:20:16.630 "name": "nvmf_tgt_poll_group_000", 00:20:16.630 "admin_qpairs": 1, 00:20:16.630 "io_qpairs": 2, 00:20:16.630 "current_admin_qpairs": 1, 00:20:16.630 
"current_io_qpairs": 2, 00:20:16.630 "pending_bdev_io": 0, 00:20:16.630 "completed_nvme_io": 26125, 00:20:16.630 "transports": [ 00:20:16.630 { 00:20:16.630 "trtype": "TCP" 00:20:16.630 } 00:20:16.630 ] 00:20:16.630 }, 00:20:16.630 { 00:20:16.630 "name": "nvmf_tgt_poll_group_001", 00:20:16.630 "admin_qpairs": 0, 00:20:16.630 "io_qpairs": 2, 00:20:16.630 "current_admin_qpairs": 0, 00:20:16.630 "current_io_qpairs": 2, 00:20:16.630 "pending_bdev_io": 0, 00:20:16.630 "completed_nvme_io": 25498, 00:20:16.630 "transports": [ 00:20:16.630 { 00:20:16.630 "trtype": "TCP" 00:20:16.630 } 00:20:16.630 ] 00:20:16.630 }, 00:20:16.630 { 00:20:16.630 "name": "nvmf_tgt_poll_group_002", 00:20:16.630 "admin_qpairs": 0, 00:20:16.630 "io_qpairs": 0, 00:20:16.630 "current_admin_qpairs": 0, 00:20:16.630 "current_io_qpairs": 0, 00:20:16.630 "pending_bdev_io": 0, 00:20:16.630 "completed_nvme_io": 0, 00:20:16.630 "transports": [ 00:20:16.630 { 00:20:16.630 "trtype": "TCP" 00:20:16.630 } 00:20:16.630 ] 00:20:16.630 }, 00:20:16.630 { 00:20:16.630 "name": "nvmf_tgt_poll_group_003", 00:20:16.630 "admin_qpairs": 0, 00:20:16.630 "io_qpairs": 0, 00:20:16.630 "current_admin_qpairs": 0, 00:20:16.630 "current_io_qpairs": 0, 00:20:16.630 "pending_bdev_io": 0, 00:20:16.630 "completed_nvme_io": 0, 00:20:16.630 "transports": [ 00:20:16.630 { 00:20:16.630 "trtype": "TCP" 00:20:16.630 } 00:20:16.630 ] 00:20:16.630 } 00:20:16.630 ] 00:20:16.630 }' 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:20:16.630 19:49:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1217953 00:20:24.735 Initializing NVMe Controllers 00:20:24.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:24.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:24.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:24.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:24.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:24.735 Initialization complete. Launching workers. 
00:20:24.735 ========================================================
00:20:24.735 Latency(us)
00:20:24.735 Device Information : IOPS MiB/s Average min max
00:20:24.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7227.10 28.23 8858.45 2083.55 53990.22
00:20:24.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6071.10 23.72 10545.97 1581.31 54556.19
00:20:24.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7387.50 28.86 8667.34 1693.07 55241.24
00:20:24.735 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6581.20 25.71 9728.96 1834.25 54306.75
00:20:24.735 ========================================================
00:20:24.735 Total : 27266.89 106.51 9392.51 1581.31 55241.24
00:20:24.735
00:20:24.735 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:20:24.736 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # nvmfcleanup
00:20:24.736 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:20:24.736 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:24.736 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:20:24.736 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:24.736 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:24.736 rmmod nvme_tcp
00:20:24.736 rmmod nvme_fabrics
00:20:24.736 rmmod nvme_keyring
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # '[' -n 1217807 ']'
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # killprocess 1217807
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' -z 1217807 ']'
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # kill -0 1217807
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # uname
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1217807
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1217807'
00:20:24.993 killing process with pid 1217807
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # kill 1217807
00:20:24.993 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@975 -- # wait 1217807
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' '' == iso ']'
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]]
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # nvmf_tcp_fini
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@282 -- # remove_spdk_ns
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:25.252 19:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:27.157 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1
00:20:27.157 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:20:27.157
00:20:27.157 real 0m44.205s
00:20:27.157 user 2m39.415s
00:20:27.157 sys 0m9.479s
00:20:27.157 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # xtrace_disable
00:20:27.157 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:27.157 ************************************
00:20:27.157 END TEST nvmf_perf_adq
00:20:27.157 ************************************
00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']'
00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1108 -- # xtrace_disable
00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:27.416 ************************************
00:20:27.416 START TEST nvmf_shutdown
00:20:27.416 ************************************
00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:20:27.416 * Looking for test storage...
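
The nvmftestfini sequence above reduces to the teardown sketch below; the pid and device names (1217807, cvl_0_0_ns_spdk, cvl_0_1) are the ones this run happened to use, and the netns removal is an assumed equivalent of the harness's _remove_spdk_ns helper, not its exact body:

sync
modprobe -v -r nvme-tcp        # unloads nvme_tcp plus the now-unused nvme_fabrics/nvme_keyring
modprobe -v -r nvme-fabrics    # no-op if the cascade above already removed it
kill 1217807 && wait 1217807   # killprocess: wait works because the target is a child of the test shell
ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1
trap - SIGINT SIGTERM EXIT
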
00:20:27.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.416 19:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:27.416 ************************************ 00:20:27.416 START TEST nvmf_shutdown_tc1 00:20:27.416 ************************************ 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # nvmf_shutdown_tc1 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # xtrace_disable 00:20:27.416 19:49:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # pci_devs=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -a pci_devs 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # pci_net_devs=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # pci_drivers=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -A 
pci_drivers 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@299 -- # net_devs=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@299 -- # local -ga net_devs 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@300 -- # e810=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@300 -- # local -ga e810 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # x722=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # local -ga x722 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # mlx=() 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # local -ga mlx 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:29.317 
19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:29.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.317 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:29.318 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:29.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.318 19:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:29.318 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # is_hw=yes 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # ip netns add 
cvl_0_0_ns_spdk 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:20:29.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:29.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:20:29.318 00:20:29.318 --- 10.0.0.2 ping statistics --- 00:20:29.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.318 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:20:29.318 00:20:29.318 --- 10.0.0.1 ping statistics --- 00:20:29.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.318 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # return 0 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:20:29.318 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:29.577 19:49:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@485 -- # nvmfpid=1221114 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@486 -- # waitforlisten 1221114 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # '[' -z 1221114 ']' 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:29.577 19:49:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.577 [2024-07-24 19:49:46.766930] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:29.577 [2024-07-24 19:49:46.767009] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.577 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.577 [2024-07-24 19:49:46.832492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.577 [2024-07-24 19:49:46.944981] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.577 [2024-07-24 19:49:46.945041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.577 [2024-07-24 19:49:46.945056] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.577 [2024-07-24 19:49:46.945070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.577 [2024-07-24 19:49:46.945081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
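
The nvmf/common.sh steps traced above (@252-@272, then @484-@486) wire the two E810 ports back-to-back through a network namespace and boot the target inside it. As one sketch, using the device names, addresses, and core mask from this run; the final poll is a crude stand-in for the harness's waitforlisten helper:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target side
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator side
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # waitforlisten, approximately
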
00:20:29.577 [2024-07-24 19:49:46.945164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.577 [2024-07-24 19:49:46.945284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.577 [2024-07-24 19:49:46.945352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:29.577 [2024-07-24 19:49:46.945355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@865 -- # return 0 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.509 [2024-07-24 19:49:47.779000] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.509 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.510 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.510 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:30.510 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:30.510 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:30.510 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:30.510 19:49:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.510 Malloc1 00:20:30.510 [2024-07-24 19:49:47.868596] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.768 Malloc2 00:20:30.768 Malloc3 00:20:30.768 Malloc4 00:20:30.768 Malloc5 00:20:30.768 Malloc6 00:20:30.768 Malloc7 00:20:31.027 Malloc8 00:20:31.027 Malloc9 00:20:31.027 Malloc10 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1221297 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1221297 /var/tmp/bdevperf.sock 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # '[' -z 1221297 ']' 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@536 -- # config=() 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:31.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@536 -- # local subsystem config 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.027 { 00:20:31.027 "params": { 00:20:31.027 "name": "Nvme$subsystem", 00:20:31.027 "trtype": "$TEST_TRANSPORT", 00:20:31.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.027 "adrfam": "ipv4", 00:20:31.027 "trsvcid": "$NVMF_PORT", 00:20:31.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.027 "hdgst": ${hdgst:-false}, 00:20:31.027 "ddgst": ${ddgst:-false} 00:20:31.027 }, 00:20:31.027 "method": "bdev_nvme_attach_controller" 00:20:31.027 } 00:20:31.027 EOF 00:20:31.027 )") 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.027 { 00:20:31.027 "params": { 00:20:31.027 "name": "Nvme$subsystem", 00:20:31.027 "trtype": "$TEST_TRANSPORT", 00:20:31.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.027 "adrfam": "ipv4", 00:20:31.027 "trsvcid": "$NVMF_PORT", 00:20:31.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.027 "hdgst": ${hdgst:-false}, 00:20:31.027 "ddgst": ${ddgst:-false} 00:20:31.027 }, 00:20:31.027 "method": "bdev_nvme_attach_controller" 00:20:31.027 } 00:20:31.027 EOF 00:20:31.027 )") 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.027 { 00:20:31.027 "params": { 00:20:31.027 "name": "Nvme$subsystem", 
00:20:31.027 "trtype": "$TEST_TRANSPORT", 00:20:31.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.027 "adrfam": "ipv4", 00:20:31.027 "trsvcid": "$NVMF_PORT", 00:20:31.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.027 "hdgst": ${hdgst:-false}, 00:20:31.027 "ddgst": ${ddgst:-false} 00:20:31.027 }, 00:20:31.027 "method": "bdev_nvme_attach_controller" 00:20:31.027 } 00:20:31.027 EOF 00:20:31.027 )") 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.027 { 00:20:31.027 "params": { 00:20:31.027 "name": "Nvme$subsystem", 00:20:31.027 "trtype": "$TEST_TRANSPORT", 00:20:31.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.027 "adrfam": "ipv4", 00:20:31.027 "trsvcid": "$NVMF_PORT", 00:20:31.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.027 "hdgst": ${hdgst:-false}, 00:20:31.027 "ddgst": ${ddgst:-false} 00:20:31.027 }, 00:20:31.027 "method": "bdev_nvme_attach_controller" 00:20:31.027 } 00:20:31.027 EOF 00:20:31.027 )") 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.027 { 00:20:31.027 "params": { 00:20:31.027 "name": "Nvme$subsystem", 00:20:31.027 "trtype": "$TEST_TRANSPORT", 00:20:31.027 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.027 "adrfam": "ipv4", 00:20:31.027 "trsvcid": "$NVMF_PORT", 00:20:31.027 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.027 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.027 "hdgst": ${hdgst:-false}, 00:20:31.027 "ddgst": ${ddgst:-false} 00:20:31.027 }, 00:20:31.027 "method": "bdev_nvme_attach_controller" 00:20:31.027 } 00:20:31.027 EOF 00:20:31.027 )") 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.027 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.027 { 00:20:31.027 "params": { 00:20:31.028 "name": "Nvme$subsystem", 00:20:31.028 "trtype": "$TEST_TRANSPORT", 00:20:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "$NVMF_PORT", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.028 "hdgst": ${hdgst:-false}, 00:20:31.028 "ddgst": ${ddgst:-false} 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 } 00:20:31.028 EOF 00:20:31.028 )") 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.028 19:49:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.028 { 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme$subsystem", 00:20:31.028 "trtype": "$TEST_TRANSPORT", 00:20:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "$NVMF_PORT", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.028 "hdgst": ${hdgst:-false}, 00:20:31.028 "ddgst": ${ddgst:-false} 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 } 00:20:31.028 EOF 00:20:31.028 )") 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.028 { 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme$subsystem", 00:20:31.028 "trtype": "$TEST_TRANSPORT", 00:20:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "$NVMF_PORT", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.028 "hdgst": ${hdgst:-false}, 00:20:31.028 "ddgst": ${ddgst:-false} 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 } 00:20:31.028 EOF 00:20:31.028 )") 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.028 { 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme$subsystem", 00:20:31.028 "trtype": "$TEST_TRANSPORT", 00:20:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "$NVMF_PORT", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.028 "hdgst": ${hdgst:-false}, 00:20:31.028 "ddgst": ${ddgst:-false} 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 } 00:20:31.028 EOF 00:20:31.028 )") 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:31.028 { 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme$subsystem", 00:20:31.028 "trtype": "$TEST_TRANSPORT", 00:20:31.028 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "$NVMF_PORT", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:31.028 "hdgst": ${hdgst:-false}, 00:20:31.028 "ddgst": ${ddgst:-false} 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 } 00:20:31.028 EOF 00:20:31.028 )") 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # cat 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # jq . 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@561 -- # IFS=, 00:20:31.028 19:49:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme1", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme2", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme3", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme4", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme5", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme6", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme7", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme8", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 
00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme9", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 },{ 00:20:31.028 "params": { 00:20:31.028 "name": "Nvme10", 00:20:31.028 "trtype": "tcp", 00:20:31.028 "traddr": "10.0.0.2", 00:20:31.028 "adrfam": "ipv4", 00:20:31.028 "trsvcid": "4420", 00:20:31.028 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:31.028 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:31.028 "hdgst": false, 00:20:31.028 "ddgst": false 00:20:31.028 }, 00:20:31.028 "method": "bdev_nvme_attach_controller" 00:20:31.028 }' 00:20:31.028 [2024-07-24 19:49:48.366091] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:31.029 [2024-07-24 19:49:48.366170] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:31.029 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.319 [2024-07-24 19:49:48.430834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.319 [2024-07-24 19:49:48.541475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.219 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@865 -- # return 0 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1221297 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:33.220 19:49:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:34.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1221297 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1221114 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@536 -- # config=() 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@536 -- # local subsystem config 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.153 "hdgst": ${hdgst:-false}, 00:20:34.153 "ddgst": ${ddgst:-false} 00:20:34.153 }, 00:20:34.153 "method": "bdev_nvme_attach_controller" 00:20:34.153 } 00:20:34.153 EOF 00:20:34.153 )") 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.153 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.153 { 00:20:34.153 "params": { 00:20:34.153 "name": "Nvme$subsystem", 00:20:34.153 "trtype": "$TEST_TRANSPORT", 00:20:34.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.153 "adrfam": "ipv4", 00:20:34.153 "trsvcid": "$NVMF_PORT", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.154 "hdgst": ${hdgst:-false}, 00:20:34.154 "ddgst": ${ddgst:-false} 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 } 00:20:34.154 EOF 00:20:34.154 )") 00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:34.154 { 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme$subsystem", 00:20:34.154 "trtype": "$TEST_TRANSPORT", 00:20:34.154 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "$NVMF_PORT", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:34.154 "hdgst": ${hdgst:-false}, 00:20:34.154 "ddgst": ${ddgst:-false} 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 } 00:20:34.154 EOF 00:20:34.154 )") 00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # cat 00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # jq . 
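(For reference, the xtrace above is nvmf/common.sh's gen_nvmf_target_json helper expanding. A minimal bash reconstruction of the pattern, pieced together from the traced commands: the per-subsystem heredoc fragment is taken verbatim from the trace, while the outer "subsystems" wrapper fed to jq is inferred, since heredoc bodies are not echoed by xtrace. TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are expected in the environment.)

gen_nvmf_target_json() {
    local subsystem config=()

    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller entry per subsystem id (default: 1).
        config+=("$(
            cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the fragments (IFS=","), embed them in a bdev-subsystem
    # config, and let jq validate and pretty-print the result.
    jq . << JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(
            IFS=","
            printf '%s\n' "${config[*]}"
        )
      ]
    }
  ]
}
JSON
}

(In this run it is invoked as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 and consumed through process substitution, e.g. bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}"), matching the shutdown.sh line 73 invocation visible above.)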
00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@561 -- # IFS=, 00:20:34.154 19:49:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme1", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme2", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme3", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme4", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme5", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme6", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme7", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme8", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme9", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 },{ 00:20:34.154 "params": { 00:20:34.154 "name": "Nvme10", 00:20:34.154 "trtype": "tcp", 00:20:34.154 "traddr": "10.0.0.2", 00:20:34.154 "adrfam": "ipv4", 00:20:34.154 "trsvcid": "4420", 00:20:34.154 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:34.154 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:34.154 "hdgst": false, 00:20:34.154 "ddgst": false 00:20:34.154 }, 00:20:34.154 "method": "bdev_nvme_attach_controller" 00:20:34.154 }' 00:20:34.154 [2024-07-24 19:49:51.380863] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:34.154 [2024-07-24 19:49:51.380952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221717 ] 00:20:34.154 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.154 [2024-07-24 19:49:51.447611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.412 [2024-07-24 19:49:51.557615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.790 Running I/O for 1 seconds... 00:20:37.161 00:20:37.161 Latency(us) 00:20:37.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.161 Verification LBA range: start 0x0 length 0x400 00:20:37.161 Nvme1n1 : 1.05 183.41 11.46 0.00 0.00 345234.65 25243.50 299815.06 00:20:37.161 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.161 Verification LBA range: start 0x0 length 0x400 00:20:37.161 Nvme2n1 : 1.05 243.01 15.19 0.00 0.00 255462.59 18350.08 254765.13 00:20:37.161 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.161 Verification LBA range: start 0x0 length 0x400 00:20:37.161 Nvme3n1 : 1.15 281.43 17.59 0.00 0.00 217117.89 7281.78 250104.79 00:20:37.161 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.161 Verification LBA range: start 0x0 length 0x400 00:20:37.161 Nvme4n1 : 1.09 240.00 15.00 0.00 0.00 248356.52 8980.86 250104.79 00:20:37.161 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.161 Verification LBA range: start 0x0 length 0x400 00:20:37.161 Nvme5n1 : 1.18 216.29 13.52 0.00 0.00 274807.28 23301.69 271853.04 00:20:37.162 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.162 Verification LBA range: start 0x0 length 0x400 00:20:37.162 Nvme6n1 : 1.17 223.00 13.94 0.00 0.00 251773.25 18738.44 265639.25 00:20:37.162 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.162 Verification LBA range: start 0x0 length 0x400 00:20:37.162 Nvme7n1 : 1.11 230.05 14.38 0.00 0.00 248033.09 20583.16 254765.13 00:20:37.162 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.162 
Verification LBA range: start 0x0 length 0x400 00:20:37.162 Nvme8n1 : 1.16 280.27 17.52 0.00 0.00 200738.39 16505.36 243891.01 00:20:37.162 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.162 Verification LBA range: start 0x0 length 0x400 00:20:37.162 Nvme9n1 : 1.18 217.38 13.59 0.00 0.00 254737.83 14369.37 310689.19 00:20:37.162 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:37.162 Verification LBA range: start 0x0 length 0x400 00:20:37.162 Nvme10n1 : 1.18 272.29 17.02 0.00 0.00 199761.24 11845.03 253211.69 00:20:37.162 =================================================================================================================== 00:20:37.162 Total : 2387.14 149.20 0.00 0.00 244116.31 7281.78 310689.19 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # nvmfcleanup 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:37.162 rmmod nvme_tcp 00:20:37.162 rmmod nvme_fabrics 00:20:37.162 rmmod nvme_keyring 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # '[' -n 1221114 ']' 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # killprocess 1221114 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' -z 1221114 ']' 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # kill -0 1221114 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # uname 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' Linux = Linux 
']' 00:20:37.162 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1221114 00:20:37.419 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:20:37.419 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:20:37.419 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1221114' 00:20:37.419 killing process with pid 1221114 00:20:37.419 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # kill 1221114 00:20:37.419 19:49:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@975 -- # wait 1221114 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@282 -- # remove_spdk_ns 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.985 19:49:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:20:39.886 00:20:39.886 real 0m12.480s 00:20:39.886 user 0m37.219s 00:20:39.886 sys 0m3.214s 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:39.886 ************************************ 00:20:39.886 END TEST nvmf_shutdown_tc1 00:20:39.886 ************************************ 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:39.886 ************************************ 00:20:39.886 START TEST nvmf_shutdown_tc2 00:20:39.886 ************************************ 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # nvmf_shutdown_tc2 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:39.886 19:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # xtrace_disable 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # pci_devs=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -a pci_devs 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # pci_net_devs=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # pci_drivers=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -A pci_drivers 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@299 -- # net_devs=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@299 -- # local -ga net_devs 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@300 -- # e810=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@300 -- # local -ga e810 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # x722=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # local -ga x722 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # mlx=() 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # local 
-ga mlx 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:20:39.886 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:39.887 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # for pci in 
"${pci_devs[@]}" 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:39.887 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:39.887 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:39.887 19:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:39.887 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # is_hw=yes 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:20:39.887 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:40.145 19:49:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2
00:20:40.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:40.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms
00:20:40.145
00:20:40.145 --- 10.0.0.2 ping statistics ---
00:20:40.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:40.145 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms
00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:40.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:40.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms
00:20:40.145
00:20:40.145 --- 10.0.0.1 ping statistics ---
00:20:40.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:40.145 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:40.145 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # return 0
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # '[' '' == iso ']'
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]]
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]]
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # '[' tcp == tcp ']'
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # modprobe nvme-tcp
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@725 -- # xtrace_disable
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@485 -- # nvmfpid=1222487
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@486 -- # waitforlisten 1222487
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # '[' -z 1222487 ']'
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 --
common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:40.146 19:49:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.146 [2024-07-24 19:49:57.433345] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:40.146 [2024-07-24 19:49:57.433429] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.146 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.146 [2024-07-24 19:49:57.504275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.404 [2024-07-24 19:49:57.617131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.404 [2024-07-24 19:49:57.617184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.404 [2024-07-24 19:49:57.617198] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.404 [2024-07-24 19:49:57.617210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.404 [2024-07-24 19:49:57.617234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
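(For reference, the tc2 prologue traced above reduces to the following bring-up sequence: the target-side port is moved into its own network namespace, addressed, and verified with pings, then nvmf_tgt is started inside that namespace. waitforlisten is an SPDK test helper from test/common/autotest_common.sh whose internals are elided; every other command is taken from the trace.)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check

# -m 0x1E pins reactors to cores 1-4; -e 0xFFFF enables every tracepoint
# group (hence the app_setup_trace notices above).
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"    # returns once /var/tmp/spdk.sock accepts RPCs

(The trace then creates the TCP transport with rpc_cmd nvmf_create_transport -t tcp -o -u 8192 and builds ten subsystems, each backed by a Malloc bdev, as shown below.)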
00:20:40.404 [2024-07-24 19:49:57.617332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.404 [2024-07-24 19:49:57.617387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.404 [2024-07-24 19:49:57.617436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.404 [2024-07-24 19:49:57.617439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@865 -- # return 0 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.335 [2024-07-24 19:49:58.399514] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.335 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:41.336 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.336 Malloc1 00:20:41.336 [2024-07-24 19:49:58.474494] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.336 Malloc2 00:20:41.336 Malloc3 00:20:41.336 Malloc4 00:20:41.336 Malloc5 00:20:41.336 Malloc6 00:20:41.594 Malloc7 00:20:41.594 Malloc8 00:20:41.594 Malloc9 00:20:41.594 Malloc10 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1222799 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1222799 /var/tmp/bdevperf.sock 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # '[' -z 1222799 ']' 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@536 -- # config=() 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@536 -- # local subsystem config 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:41.594 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": 
"Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 
00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:20:41.595 { 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme$subsystem", 00:20:41.595 "trtype": "$TEST_TRANSPORT", 00:20:41.595 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "$NVMF_PORT", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.595 "hdgst": ${hdgst:-false}, 00:20:41.595 "ddgst": ${ddgst:-false} 00:20:41.595 }, 00:20:41.595 "method": "bdev_nvme_attach_controller" 00:20:41.595 } 00:20:41.595 EOF 00:20:41.595 )") 00:20:41.595 19:49:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # cat 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # jq . 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@561 -- # IFS=, 00:20:41.595 19:49:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:20:41.595 "params": { 00:20:41.595 "name": "Nvme1", 00:20:41.595 "trtype": "tcp", 00:20:41.595 "traddr": "10.0.0.2", 00:20:41.595 "adrfam": "ipv4", 00:20:41.595 "trsvcid": "4420", 00:20:41.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.595 "hdgst": false, 00:20:41.595 "ddgst": false 00:20:41.595 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme2", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme3", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme4", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme5", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme6", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme7", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme8", 00:20:41.596 "trtype": "tcp", 
00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme9", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 },{ 00:20:41.596 "params": { 00:20:41.596 "name": "Nvme10", 00:20:41.596 "trtype": "tcp", 00:20:41.596 "traddr": "10.0.0.2", 00:20:41.596 "adrfam": "ipv4", 00:20:41.596 "trsvcid": "4420", 00:20:41.596 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:41.596 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:41.596 "hdgst": false, 00:20:41.596 "ddgst": false 00:20:41.596 }, 00:20:41.596 "method": "bdev_nvme_attach_controller" 00:20:41.596 }' 00:20:41.854 [2024-07-24 19:49:58.978687] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:41.854 [2024-07-24 19:49:58.978762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1222799 ] 00:20:41.854 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.854 [2024-07-24 19:49:59.040866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.854 [2024-07-24 19:49:59.150153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.752 Running I/O for 10 seconds... 
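The nvmf/common.sh@536-@562 expansions above are the body of gen_nvmf_target_json: it appends one bdev_nvme_attach_controller fragment per subsystem ID (here 1-10), comma-joins the fragments, and validates the result with jq before handing it to bdevperf as the --json config on /dev/fd/63. A condensed sketch of the helper as it can be reconstructed from this trace; the outer "subsystems"/"bdev" wrapper is not echoed by xtrace, so that part is an assumption based on SPDK's nvmf/common.sh rather than something visible in this log.

gen_nvmf_target_json() {
    local subsystem config=()

    # One attach-controller entry per requested subsystem ID (default: 1),
    # expanding the same variables seen in the trace above.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the fragments and pretty-print/validate with jq -- the
    # "IFS=,", "printf '%s\n'" and "jq ." steps in the trace. The wrapper
    # object below is assumed, not shown in the xtrace output.
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=,; printf '%s\n' "${config[*]}") ]
    }
  ]
}
EOF
}

bdevperf consumes this through process substitution (--json /dev/fd/63) and, per the command line at the top of the trace, drives each attached NvmeNn1 bdev at queue depth 64 (-q 64) with 64 KiB I/Os (-o 65536) in a 10-second verify run (-w verify -t 10), matching the per-job parameters echoed in the results table below.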
00:20:44.317 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:44.317 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@865 -- # return 0 00:20:44.317 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:44.317 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:44.317 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1222799 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' -z 1222799 ']' 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # kill -0 1222799 00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # uname
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1222799
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1222799'
00:20:44.575 killing process with pid 1222799
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # kill 1222799
00:20:44.575 19:50:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@975 -- # wait 1222799
00:20:44.575 Received shutdown signal, test time was about 0.732009 seconds
00:20:44.575
00:20:44.575 Latency(us)
00:20:44.575 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:44.575 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme1n1 : 0.72 268.24 16.76 0.00 0.00 234768.06 18447.17 250104.79
00:20:44.575 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme2n1 : 0.73 264.06 16.50 0.00 0.00 231941.06 19320.98 245444.46
00:20:44.575 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme3n1 : 0.72 267.18 16.70 0.00 0.00 223235.54 17864.63 231463.44
00:20:44.575 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme4n1 : 0.71 270.34 16.90 0.00 0.00 214088.82 28156.21 239230.67
00:20:44.575 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme5n1 : 0.73 263.73 16.48 0.00 0.00 214018.02 38641.97 254765.13
00:20:44.575 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme6n1 : 0.69 185.40 11.59 0.00 0.00 293633.90 27573.67 256318.58
00:20:44.575 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme7n1 : 0.73 262.60 16.41 0.00 0.00 203153.00 18544.26 251658.24
00:20:44.575 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme8n1 : 0.68 186.92 11.68 0.00 0.00 272862.63 16117.00 229910.00
00:20:44.575 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme9n1 : 0.71 181.33 11.33 0.00 0.00 274889.77 22524.97 279620.27
00:20:44.575 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:44.575 Verification LBA range: start 0x0 length 0x400
00:20:44.575 Nvme10n1 : 0.70 182.91 11.43 0.00 0.00 262926.79 39224.51 259425.47
00:20:44.575 ===================================================================================================================
00:20:44.575 Total : 2332.70 145.79 0.00 0.00 237393.83 16117.00 279620.27
00:20:44.833 19:50:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1222487
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # nvmfcleanup
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:20:45.765 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:20:45.765 rmmod nvme_tcp
00:20:46.022 rmmod nvme_fabrics
00:20:46.022 rmmod nvme_keyring
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # '[' -n 1222487 ']'
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # killprocess 1222487
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' -z 1222487 ']'
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # kill -0 1222487
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # uname
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1222487
00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:20:46.022 19:50:03
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1222487' 00:20:46.022 killing process with pid 1222487 00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # kill 1222487 00:20:46.022 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@975 -- # wait 1222487 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@282 -- # remove_spdk_ns 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.587 19:50:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:20:48.487 00:20:48.487 real 0m8.574s 00:20:48.487 user 0m27.314s 00:20:48.487 sys 0m1.448s 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:48.487 ************************************ 00:20:48.487 END TEST nvmf_shutdown_tc2 00:20:48.487 ************************************ 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:48.487 ************************************ 00:20:48.487 START TEST nvmf_shutdown_tc3 00:20:48.487 ************************************ 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # nvmf_shutdown_tc3 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # xtrace_disable 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # pci_devs=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -a pci_devs 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # pci_net_devs=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # pci_drivers=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -A pci_drivers 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@299 -- # net_devs=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@299 -- # local -ga net_devs 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@300 -- # e810=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@300 -- # local -ga e810 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # x722=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # local -ga x722 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # mlx=() 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # local -ga mlx 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.487 19:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:48.487 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:48.487 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ 
ice == unknown ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:48.487 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.487 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:48.488 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 
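The nvmf/common.sh@293-@405 block above is SPDK's NIC discovery: it keys on PCI vendor/device IDs (intel=0x8086, mellanox=0x15b3, with 0x159b matching the E810 ports found at 0000:0a:00.0 and 0000:0a:00.1) and resolves each matched function to its kernel interface through /sys/bus/pci/devices/$pci/net/. A standalone sketch of that classification logic follows; the real script walks a pre-built pci_bus_cache map, so parsing lspci here is an illustrative substitute rather than SPDK's exact mechanism.

#!/usr/bin/env bash
# Classify NICs by PCI vendor:device ID the way the trace above does and
# collect their kernel interface names. The lspci parsing is a stand-in
# for the pci_bus_cache lookups in nvmf/common.sh.
intel=0x8086 mellanox=0x15b3
net_devs=()

while read -r pci vendor device; do
    case "$vendor:$device" in
        "$intel:0x1592" | "$intel:0x159b") family=e810 ;; # Intel E810
        "$intel:0x37d2") family=x722 ;;                   # Intel X722
        "$mellanox:"*) family=mlx ;;                      # Mellanox ConnectX
        *) continue ;;
    esac
    # Resolve the netdev(s) bound to this PCI function via sysfs.
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $net ]] || continue
        echo "Found net devices under $pci ($family): ${net##*/}"
        net_devs+=("${net##*/}")
    done
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x" $3, "0x" $4}')

On this host the two matched ports surface as cvl_0_0 and cvl_0_1, which the trace then assigns as the target and initiator interfaces for the TCP run.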
00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # is_hw=yes 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.488 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:20:48.777 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:48.777 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:20:48.777 00:20:48.777 --- 10.0.0.2 ping statistics --- 00:20:48.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.777 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.777 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:48.777 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:20:48.777 00:20:48.777 --- 10.0.0.1 ping statistics --- 00:20:48.777 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.777 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # return 0 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@485 -- # nvmfpid=1223708 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@486 -- # waitforlisten 1223708 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # '[' -z 1223708 ']' 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:48.777 19:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.777 [2024-07-24 19:50:06.035919] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:48.777 [2024-07-24 19:50:06.036006] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.777 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.777 [2024-07-24 19:50:06.104200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.034 [2024-07-24 19:50:06.222110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.034 [2024-07-24 19:50:06.222179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.034 [2024-07-24 19:50:06.222196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.034 [2024-07-24 19:50:06.222210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.034 [2024-07-24 19:50:06.222221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:49.034 [2024-07-24 19:50:06.222338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.034 [2024-07-24 19:50:06.222419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.034 [2024-07-24 19:50:06.222474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.034 [2024-07-24 19:50:06.222471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@865 -- # return 0 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:49.966 19:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 [2024-07-24 19:50:06.998777] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.966 19:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:49.966 
19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:49.966 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.966 Malloc1 00:20:49.966 [2024-07-24 19:50:07.073867] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.966 Malloc2 00:20:49.966 Malloc3 00:20:49.966 Malloc4 00:20:49.966 Malloc5 00:20:49.966 Malloc6 00:20:49.966 Malloc7 00:20:50.224 Malloc8 00:20:50.224 Malloc9 00:20:50.224 Malloc10 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1223895 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1223895 /var/tmp/bdevperf.sock 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # '[' -z 1223895 ']' 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@536 -- # config=() 00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
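Between the "TCP Transport Init" notice and this second bdevperf launch, starttarget populated the target (started earlier under ip netns exec cvl_0_0_ns_spdk) with the ten malloc-backed subsystems whose bdev names Malloc1-Malloc10 are echoed above; the rpc_cmd batch itself is not expanded in the trace. Below is a sketch of the equivalent per-subsystem provisioning with scripts/rpc.py under the usual SPDK conventions; the 64 MiB/512-byte malloc geometry and the SPDKcnodeN serial numbers are illustrative assumptions, not values recorded in this log.

# Transport first, exactly as traced above (nvmf_create_transport -t tcp -o -u 8192).
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$rpc nvmf_create_transport -t tcp -o -u 8192

for i in {1..10}; do
    # Backing bdev; 64 MiB with 512-byte blocks is assumed for illustration.
    $rpc bdev_malloc_create -b "Malloc$i" 64 512
    # One subsystem per bdev: -a allows any host NQN, -s sets the serial number.
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDKcnode$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Listener on the namespaced target address verified by the ping check above.
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

The "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" notice above is the target-side confirmation of those listeners; the bdevperf process being waited on here then attaches to each cnode over that address using the same generated --json config as in tc2.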
00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@536 -- # local subsystem config
00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@841 -- # xtrace_disable
00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}"
00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF
00:20:50.224 {
00:20:50.224 "params": {
00:20:50.224 "name": "Nvme$subsystem",
00:20:50.224 "trtype": "$TEST_TRANSPORT",
00:20:50.224 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:50.224 "adrfam": "ipv4",
00:20:50.224 "trsvcid": "$NVMF_PORT",
00:20:50.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:50.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:50.224 "hdgst": ${hdgst:-false},
00:20:50.224 "ddgst": ${ddgst:-false}
00:20:50.224 },
00:20:50.224 "method": "bdev_nvme_attach_controller"
00:20:50.224 }
00:20:50.224 EOF
00:20:50.224 )")
00:20:50.224 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # cat
[... the same "for subsystem" / config+= heredoc / cat trace repeats verbatim for each of the ten subsystems; duplicates omitted ...]
00:20:50.225 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # jq .
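For readers reconstructing what this trace is doing: the config-generation helper in nvmf/common.sh (lines @536-@562 above) builds one heredoc JSON fragment per subsystem, accumulates the fragments in a config array, and later comma-joins them for jq. Below is a minimal standalone sketch of that pattern; the subsystem count, target address, port, and the bare {"config": [...]} wrapper are illustrative assumptions, since the real helper embeds the fragments in a fuller bdevperf configuration document.

#!/usr/bin/env bash
# Sketch of the config-assembly pattern traced above: one JSON fragment per
# subsystem, comma-joined and validated with jq. Address, port and subsystem
# count are stand-ins for the values resolved in this run.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in {1..10}; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# Comma-join the fragments inside a subshell so the IFS change does not leak;
# setting IFS on the same line as printf would not affect the "${config[*]}"
# expansion, which is why the trace shows IFS=, as its own step.
joined=$(
    IFS=,
    printf '%s' "${config[*]}"
)
printf '{"config":[%s]}\n' "$joined" | jq .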
00:20:50.225 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@561 -- # IFS=, 00:20:50.225 19:50:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme1", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme2", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme3", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme4", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme5", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme6", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme7", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:50.225 "hdgst": false, 00:20:50.225 "ddgst": false 00:20:50.225 }, 00:20:50.225 "method": "bdev_nvme_attach_controller" 00:20:50.225 },{ 00:20:50.225 "params": { 00:20:50.225 "name": "Nvme8", 00:20:50.225 "trtype": "tcp", 00:20:50.225 "traddr": "10.0.0.2", 00:20:50.225 "adrfam": "ipv4", 00:20:50.225 "trsvcid": "4420", 00:20:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:50.225 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:50.226 "hdgst": false, 00:20:50.226 "ddgst": false 00:20:50.226 }, 00:20:50.226 "method": "bdev_nvme_attach_controller" 00:20:50.226 },{ 00:20:50.226 "params": { 00:20:50.226 "name": "Nvme9", 00:20:50.226 "trtype": "tcp", 00:20:50.226 "traddr": "10.0.0.2", 00:20:50.226 "adrfam": "ipv4", 00:20:50.226 "trsvcid": "4420", 00:20:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:50.226 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:50.226 "hdgst": false, 00:20:50.226 "ddgst": false 00:20:50.226 }, 00:20:50.226 "method": "bdev_nvme_attach_controller" 00:20:50.226 },{ 00:20:50.226 "params": { 00:20:50.226 "name": "Nvme10", 00:20:50.226 "trtype": "tcp", 00:20:50.226 "traddr": "10.0.0.2", 00:20:50.226 "adrfam": "ipv4", 00:20:50.226 "trsvcid": "4420", 00:20:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:50.226 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:50.226 "hdgst": false, 00:20:50.226 "ddgst": false 00:20:50.226 }, 00:20:50.226 "method": "bdev_nvme_attach_controller" 00:20:50.226 }' 00:20:50.226 [2024-07-24 19:50:07.591263] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:20:50.226 [2024-07-24 19:50:07.591346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223895 ] 00:20:50.483 EAL: No free 2048 kB hugepages reported on node 1 00:20:50.483 [2024-07-24 19:50:07.653613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.483 [2024-07-24 19:50:07.763151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.379 Running I/O for 10 seconds... 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@865 -- # return 0 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:52.379 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:52.380 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:52.638 19:50:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:52.913 19:50:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1223708
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' -z 1223708 ']'
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # kill -0 1223708
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # uname
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1223708
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']'
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1223708'
00:20:52.913 killing process with pid 1223708
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # kill 1223708
00:20:52.913 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@975 -- # wait 1223708
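The loop traced above (target/shutdown.sh@59-@67) polled Nvme1n1's read counter three times, seeing 3, then 67, then 131 reads before crossing the 100-read threshold, after which killprocess stopped bdevperf. Below is a rough reconstruction of that waitforio helper from the trace, not the script itself; the scripts/rpc.py client path is an assumption, while the socket and bdev names are the ones used in this run.

#!/usr/bin/env bash
# Reconstructed from the waitforio trace above: poll a bdev's read counter
# over the bdevperf RPC socket until enough I/O has been observed.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    [[ -n $rpc_sock && -n $bdev ]] || return 1
    for ((i = 10; i != 0; i--)); do
        # scripts/rpc.py is SPDK's standard RPC client; the path is assumed.
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        # Succeed once at least 100 reads have completed, as in the run above.
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio /var/tmp/bdevperf.sock Nvme1n1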
00:20:52.913 [2024-07-24 19:50:10.203277] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb524c0 is same with the state(6) to be set
[... identical recv-state messages for tqpair=0xb524c0 omitted ...]
00:20:52.913 [2024-07-24 19:50:10.206697] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54fa0 is same with the state(6) to be set
[... identical recv-state messages for tqpair=0xb54fa0 omitted ...]
00:20:52.915 [2024-07-24 19:50:10.210276] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb52980 is same with the state(6) to be set
[... identical recv-state messages for tqpair=0xb52980 omitted ...]
00:20:52.915 [2024-07-24 19:50:10.210998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.915 [2024-07-24 19:50:10.211039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.915 [2024-07-24 19:50:10.211056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.915 [2024-07-24 19:50:10.211070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.915 [2024-07-24 19:50:10.211084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.915 [2024-07-24 19:50:10.211097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.915 [2024-07-24 19:50:10.211111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.915 [2024-07-24 19:50:10.211124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.915 [2024-07-24 19:50:10.211137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24354d0 is same with the state(6) to be set
00:20:52.916 [2024-07-24 19:50:10.211248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.916 [2024-07-24 19:50:10.211270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.916 [2024-07-24 19:50:10.211286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.916 [2024-07-24 19:50:10.211299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.916 [2024-07-24 19:50:10.211312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.916 [2024-07-24 19:50:10.211326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.916 [2024-07-24 19:50:10.211339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.916 [2024-07-24 19:50:10.211352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.916 [2024-07-24 19:50:10.211365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370830 is same with the state(6) to be set
00:20:52.916 [2024-07-24 19:50:10.213983] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb52e40 is same with the state(6) to be set
[... identical recv-state messages for tqpair=0xb52e40 omitted ...]
00:20:52.916 [2024-07-24 19:50:10.215333] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.916 [2024-07-24 19:50:10.215838] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.916 [2024-07-24 19:50:10.220745] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set
[... identical recv-state messages for tqpair=0xb53320 omitted ...]
is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221173] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221186] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221197] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221209] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221221] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221251] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221267] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221279] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221292] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221303] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221315] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221327] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221339] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221351] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221363] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221374] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221386] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221398] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221410] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221422] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221433] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221445] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221457] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221470] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221482] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221494] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221505] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221517] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221535] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221547] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221559] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221571] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221586] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.221600] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53320 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222441] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222470] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222484] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222497] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222509] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222520] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222541] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222553] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222565] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222577] 
tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222589] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222601] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222613] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222625] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222637] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222648] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222660] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222673] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222685] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222697] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222709] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222721] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222733] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222745] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.917 [2024-07-24 19:50:10.222757] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222774] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222787] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222799] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222811] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222823] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222835] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 
00:20:52.918 [2024-07-24 19:50:10.222847] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222859] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222871] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222883] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222894] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222906] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222918] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222930] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222942] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222954] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222966] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222978] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.222990] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223002] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223013] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223025] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223038] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223050] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223062] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223074] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223086] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223098] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is 
same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223114] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223126] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223138] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223150] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223162] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223174] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223186] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223198] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223210] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.223222] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb537e0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224713] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224740] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224754] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224766] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224778] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224790] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224803] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224815] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224827] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224839] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224850] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224863] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224875] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224887] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224899] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224912] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224924] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224941] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224954] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224966] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224978] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.224990] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225003] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225015] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225028] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225040] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225052] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225064] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225076] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225088] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225100] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225112] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.918 [2024-07-24 19:50:10.225125] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225136] 
tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225148] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225161] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225173] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225185] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225197] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225210] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225222] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225234] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225254] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225268] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225284] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225297] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225309] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225321] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225334] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225346] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225358] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225369] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225381] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225393] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225406] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 
00:20:52.919 [2024-07-24 19:50:10.225418] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225430] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225442] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225455] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225467] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225479] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225490] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.225502] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53ca0 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226786] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226814] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226828] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226841] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226853] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226866] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226878] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226890] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226902] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226919] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226932] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226945] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226957] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226969] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is 
same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226981] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.226994] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227006] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227018] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227030] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227042] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227054] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227067] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227079] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227092] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227104] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227116] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227128] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227140] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227153] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227165] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227178] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227190] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227202] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227214] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227226] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227238] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227264] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227278] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227291] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227303] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227315] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227327] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227340] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227353] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227365] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227377] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227389] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227401] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227413] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227425] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227437] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.919 [2024-07-24 19:50:10.227449] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227462] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227474] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227486] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227498] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227510] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227522] 
tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227534] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227547] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227559] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227572] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.227583] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54160 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228562] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228588] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228602] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228614] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228626] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228639] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228651] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228663] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228674] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228687] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228699] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228711] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228723] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228734] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228746] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228758] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 
00:20:52.920 [2024-07-24 19:50:10.228770] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228782] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228793] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228805] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228817] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228829] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228841] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228853] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228864] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228876] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228889] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228905] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228918] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228930] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228942] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228954] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228966] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228978] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.228990] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229002] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229014] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229026] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is 
same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229038] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229050] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229062] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229074] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229086] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229098] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229110] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229122] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229134] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229146] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229158] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229170] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229181] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229194] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229207] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229219] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229231] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229255] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229269] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229282] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229295] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb54620 is same with the state(6) to be set 00:20:52.920 [2024-07-24 19:50:10.229307] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
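[editor's note] The runs collapsed above all come from the recv-state setter in SPDK's TCP code (tcp.c:1747 on the target side, nvme_tcp.c:327 on the host side): when a qpair is asked to enter the recv state it is already in, the function logs this error and returns without changing anything, so a teardown path that keeps requesting the same transition floods the log. Below is a minimal standalone sketch of that guard pattern; struct tqpair and tqpair_set_recv_state are illustrative stand-ins, not SPDK's exact definitions.

    #include <stdio.h>

    struct tqpair {
        int recv_state;   /* the "(6)" printed in the log is the requested value of this field */
    };

    /* Guard pattern behind the repeated ERROR line: a request to re-enter
     * the current state is logged and ignored rather than applied. */
    static void tqpair_set_recv_state(struct tqpair *tqpair, int state)
    {
        if (tqpair->recv_state == state) {
            fprintf(stderr,
                    "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tqpair qp = { .recv_state = 0 };
        tqpair_set_recv_state(&qp, 6);   /* first request: transitions quietly */
        tqpair_set_recv_state(&qp, 6);   /* identical second request: logs the error */
        return 0;
    }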
00:20:52.920 [2024-07-24 19:50:10.229554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24354d0 (9): Bad file descriptor
00:20:52.920 [2024-07-24 19:50:10.229628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:52.920 [2024-07-24 19:50:10.229651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1-3 through 2024-07-24 19:50:10.229732]
00:20:52.920 [2024-07-24 19:50:10.229745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244b660 is same with the state(6) to be set
[same pattern, four aborted ASYNC EVENT REQUESTs (qid:0 cid:0-3) followed by the recv-state error, repeated for tqpair=0x23a0ec0 (through 19:50:10.229910) and tqpair=0x2394b50 (through 19:50:10.230070)]
00:20:52.921 [2024-07-24 19:50:10.230098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370830 (9): Bad file descriptor
[same pattern repeated for tqpair=0x239ac80 (through 19:50:10.230292), tqpair=0x2449590 (through 19:50:10.230483), tqpair=0x1e72610 (through 19:50:10.230653) and tqpair=0x2530f90 (through 19:50:10.230816)]
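[editor's note] Every completion in the abort blocks above and below carries status "(00/08)": status code type 0x0 is NVMe Generic Command Status, and code 0x08 in that set is "Command Aborted due to SQ Deletion", the expected completion for requests still outstanding when their submission queue is torn down. A small sketch decoding the pair as printed by the log; the function name is illustrative (SPDK's own string tables live in nvme_qpair.c).

    #include <stdio.h>

    /* Map the "(sct/sc)" pair printed after each completion to a label.
     * Only the code seen in this log is handled explicitly. */
    static const char *decode_status(unsigned sct, unsigned sc)
    {
        if (sct == 0x0 && sc == 0x08)
            return "ABORTED - SQ DELETION";
        return "OTHER";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", decode_status(0x0, 0x08));
        return 0;
    }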
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.921 [2024-07-24 19:50:10.230751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.230765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.921 [2024-07-24 19:50:10.230777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.230791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:52.921 [2024-07-24 19:50:10.230803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.230816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2530f90 is same with the state(6) to be set 00:20:52.921 [2024-07-24 19:50:10.230921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.230942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.230968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.230983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.921 [2024-07-24 19:50:10.231287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.921 [2024-07-24 19:50:10.231303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.231972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.231987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.922 [2024-07-24 19:50:10.232262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.922 [2024-07-24 19:50:10.232277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:52.923 [2024-07-24 19:50:10.232604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.923 [2024-07-24 19:50:10.232790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.923 [2024-07-24 19:50:10.232872] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24d3da0 was disconnected and freed. reset controller. 
00:20:52.923 [2024-07-24 19:50:10.234844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:52.923 [2024-07-24 19:50:10.234878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2394b50 (9): Bad file descriptor
00:20:52.923 [2024-07-24 19:50:10.234976] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.923 [2024-07-24 19:50:10.236266] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.923 [2024-07-24 19:50:10.236337] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.923 [2024-07-24 19:50:10.236405] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.923 [2024-07-24 19:50:10.236472] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.923 [2024-07-24 19:50:10.236764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.923 [2024-07-24 19:50:10.236790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.923 [2024-07-24 19:50:10.237238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.923 [2024-07-24 19:50:10.237261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.923 [2024-07-24 19:50:10.237276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2424280 is same with the state(6) to be set
00:20:52.923 [2024-07-24 19:50:10.237349] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2424280 was disconnected and freed. reset controller.
00:20:52.923 [2024-07-24 19:50:10.237519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.923 [2024-07-24 19:50:10.237553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2394b50 with addr=10.0.0.2, port=4420
00:20:52.923 [2024-07-24 19:50:10.237569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2394b50 is same with the state(6) to be set
00:20:52.923 [2024-07-24 19:50:10.237697] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:52.923 [2024-07-24 19:50:10.238614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:52.924 [2024-07-24 19:50:10.238688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac6e0 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.238725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2394b50 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.238847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:52.924 [2024-07-24 19:50:10.238870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:52.924 [2024-07-24 19:50:10.238886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:52.924 [2024-07-24 19:50:10.239214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:52.924 [2024-07-24 19:50:10.239355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.924 [2024-07-24 19:50:10.239383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ac6e0 with addr=10.0.0.2, port=4420
00:20:52.924 [2024-07-24 19:50:10.239399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac6e0 is same with the state(6) to be set
00:20:52.924 [2024-07-24 19:50:10.239492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac6e0 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:52.924 [2024-07-24 19:50:10.239585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:52.924 [2024-07-24 19:50:10.239599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:52.924 [2024-07-24 19:50:10.239633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244b660 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0ec0 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239ac80 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2449590 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e72610 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2530f90 (9): Bad file descriptor
00:20:52.924 [2024-07-24 19:50:10.239886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:52.924 [2024-07-24 19:50:10.239955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.924 [2024-07-24 19:50:10.239977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.925 [2024-07-24 19:50:10.241858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.925 [2024-07-24 19:50:10.241872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.925 [2024-07-24 19:50:10.241886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242c250 is same with the state(6) to be set
00:20:52.925 [2024-07-24 19:50:10.243162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.925 [2024-07-24 19:50:10.243185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.926 [2024-07-24 19:50:10.243542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.926 [2024-07-24 19:50:10.243556] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.243986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.243999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.926 [2024-07-24 19:50:10.244509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.926 [2024-07-24 19:50:10.244525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:52.927 [2024-07-24 19:50:10.244756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.244974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.244989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.245003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 19:50:10.245018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.927 [2024-07-24 19:50:10.245031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.927 [2024-07-24 
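The ABORTED - SQ DELETION (00/08) completions above are generic status SCT 0x0 / SC 0x08: when a submission queue is deleted during a controller reset, every command still outstanding on that queue is force-completed with this code, so these records indicate torn-down queues rather than media errors. A minimal sketch of how a consumer of SPDK's public completion API can tell this status apart in its I/O callback (the io_complete and retry names are illustrative, not from this test):

/* Sketch: recognize ABORTED - SQ DELETION (00/08) in an I/O completion
 * callback using SPDK's public nvme API. Error handling elided. */
#include "spdk/nvme.h"
#include <stdbool.h>
#include <stdio.h>

static void
io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	bool *retry = ctx;

	if (spdk_nvme_cpl_is_error(cpl)) {
		/* SCT 0x0 (generic) + SC 0x08: the SQ was deleted under the
		 * command, e.g. while the controller was being reset. */
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			*retry = true;  /* resubmit once the qpair is reconnected */
		} else {
			fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
				cpl->status.sct, cpl->status.sc);
		}
	}
}

Queued-up reads and writes that complete this way, like the cid:0-63 runs above, can in principle be resubmitted unchanged after the qpair comes back, which is why they print at NOTICE rather than ERROR level.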
00:20:52.927 [2024-07-24 19:50:10.247161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:52.927 [2024-07-24 19:50:10.247197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:52.927 [2024-07-24 19:50:10.247592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.927 [2024-07-24 19:50:10.247623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370830 with addr=10.0.0.2, port=4420
00:20:52.927 [2024-07-24 19:50:10.247641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370830 is same with the state(6) to be set
00:20:52.927 [2024-07-24 19:50:10.247755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.927 [2024-07-24 19:50:10.247783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24354d0 with addr=10.0.0.2, port=4420
00:20:52.927 [2024-07-24 19:50:10.247799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24354d0 is same with the state(6) to be set
00:20:52.927 [2024-07-24 19:50:10.248396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:52.927 [2024-07-24 19:50:10.248440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370830 (9): Bad file descriptor
00:20:52.927 [2024-07-24 19:50:10.248471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24354d0 (9): Bad file descriptor
00:20:52.927 [2024-07-24 19:50:10.248644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.927 [2024-07-24 19:50:10.248671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2394b50 with addr=10.0.0.2, port=4420
00:20:52.927 [2024-07-24 19:50:10.248687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2394b50 is same with the state(6) to be set
00:20:52.927 [2024-07-24 19:50:10.248702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:52.927 [2024-07-24 19:50:10.248715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:52.927 [2024-07-24 19:50:10.248729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:52.927 [2024-07-24 19:50:10.248749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:52.927 [2024-07-24 19:50:10.248763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:52.927 [2024-07-24 19:50:10.248775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:52.927 [2024-07-24 19:50:10.248850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:52.927 [2024-07-24 19:50:10.248870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:52.927 [2024-07-24 19:50:10.248885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2394b50 (9): Bad file descriptor
00:20:52.927 [2024-07-24 19:50:10.248935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:52.927 [2024-07-24 19:50:10.248951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:52.927 [2024-07-24 19:50:10.248963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:52.927 [2024-07-24 19:50:10.249023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:52.927 [2024-07-24 19:50:10.249081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:52.927 [2024-07-24 19:50:10.249238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:52.927 [2024-07-24 19:50:10.249274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ac6e0 with addr=10.0.0.2, port=4420
00:20:52.927 [2024-07-24 19:50:10.249290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac6e0 is same with the state(6) to be set
00:20:52.927 [2024-07-24 19:50:10.249344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac6e0 (9): Bad file descriptor
00:20:52.927 [2024-07-24 19:50:10.249396] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:20:52.927 [2024-07-24 19:50:10.249413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:20:52.927 [2024-07-24 19:50:10.249426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:20:52.928 [2024-07-24 19:50:10.249479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
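errno 111 is ECONNREFUSED: with the target's listener down at this point in the test, each reconnect attempt for cnode1, cnode2, cnode9, and cnode10 fails at connect(), spdk_nvme_ctrlr_reconnect_poll_async reports the reinitialization failure, nvme_ctrlr_fail parks the controller in the failed state, and bdev_nvme records the reset as failed. A hedged sketch of that disconnect/reconnect cycle against SPDK's public controller API (reset_ctrlr is an illustrative name; signatures are as recalled from spdk/nvme.h, not taken from this build):

/* Sketch: the disconnect/reconnect cycle behind the messages above.
 * A real application drives the poll from an SPDK poller, not a busy loop. */
#include "spdk/nvme.h"
#include <errno.h>

static int
reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
{
	int rc;

	/* Tears down all qpairs; queued I/O completes ABORTED - SQ DELETION. */
	rc = spdk_nvme_ctrlr_disconnect(ctrlr);
	if (rc != 0) {
		return rc;
	}

	spdk_nvme_ctrlr_reconnect_async(ctrlr);

	/* -EAGAIN means "still connecting"; any other nonzero value is final.
	 * A refused connect() (errno 111) eventually surfaces here as the
	 * "controller reinitialization failed" error in the log. */
	do {
		rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
	} while (rc == -EAGAIN);

	return rc;
}

Once the poll returns a final error, the controller sits in the failed state until a later reset attempt succeeds, which is why the subsequent I/O dumps below show yet more queues being flushed.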
00:20:52.928 [2024-07-24 19:50:10.249752-10.251651] nvme_qpair.c: [condensed: alternating 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records — WRITE sqid:1 cid:60-63 lba:32256-32640, READ sqid:1 cid:5-18 lba:25216-26880, WRITE sqid:1 cid:0-4 lba:32768-33280, READ sqid:1 cid:19-59 lba:27008-32128 (all nsid:1 len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:52.929 [2024-07-24 19:50:10.251665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d5190 is same with the state(6) to be set
00:20:52.929 [2024-07-24 19:50:10.252928-10.254283] nvme_qpair.c: [condensed: alternating 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* records — WRITE sqid:1 cid:57-63 lba:31872-32640, READ sqid:1 cid:5-17 lba:25216-26752, WRITE sqid:1 cid:0-4 lba:32768-33280, READ sqid:1 cid:18-38 lba:26880-29440 (all nsid:1 len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:52.930 [2024-07-24 19:50:10.254297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.930 [2024-07-24 19:50:10.254492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.930 [2024-07-24 19:50:10.254506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.254830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.254844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2369df0 is same with the state(6) to be set 00:20:52.931 [2024-07-24 19:50:10.256057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.931 [2024-07-24 19:50:10.256080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.931 [2024-07-24 19:50:10.256100] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
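The (00/08) pair printed with each aborted completion above is the NVMe status code type and status code: SCT 00h (generic command status) with SC 08h, which SPDK renders as ABORTED - SQ DELETION, and dnr:0 means the controller did not set Do Not Retry. A minimal sketch of how a completion callback could recognize this case using SPDK's public completion type (the helper name is hypothetical, not part of this test):

#include <stdbool.h>
#include "spdk/nvme.h"

/* Hypothetical helper: true when a command completed as
 * "ABORTED - SQ DELETION (00/08)", i.e. its submission queue was
 * deleted out from under it, as in the abort flood above. */
static bool
aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
{
	return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	       cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
}

Because dnr is clear in every completion here, a host that sees this status may resubmit the same READ/WRITE once a new I/O qpair is connected.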
00:20:52.931 [2024-07-24 19:50:10.256057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.931 [2024-07-24 19:50:10.256080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.933 [2024-07-24 19:50:10.257944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.933 [2024-07-24 19:50:10.257958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.933 [2024-07-24 19:50:10.257972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236b240 is same with the state(6) to be set
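Each READ/WRITE line above is printed from the raw submission queue entry: for NVM read/write commands the starting LBA is carried in CDW10 (low 32 bits) and CDW11 (high 32 bits), and CDW12 bits 15:0 hold the number of logical blocks minus one, so len:128 travels on the wire as NLB = 127. A sketch of that decoding against SPDK's command struct (the function is illustrative, not the printer used by this test):

#include <stdint.h>
#include "spdk/nvme_spec.h"

/* Illustrative decode of the fields behind lines like
 * "READ sqid:1 cid:63 nsid:1 lba:24448 len:128". */
static void
decode_nvm_rw(const struct spdk_nvme_cmd *cmd, uint64_t *lba, uint32_t *blocks)
{
	*lba = ((uint64_t)cmd->cdw11 << 32) | cmd->cdw10;   /* SLBA: CDW11:CDW10 */
	*blocks = (cmd->cdw12 & 0xFFFF) + 1;                /* NLB is zero-based */
}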
00:20:52.933 [2024-07-24 19:50:10.259199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.933 [2024-07-24 19:50:10.259221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.934 [2024-07-24 19:50:10.261093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:52.934 [2024-07-24 19:50:10.261106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:52.934 [2024-07-24 19:50:10.261120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x236c730 is same with the state(6) to be set
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.934 [2024-07-24 19:50:10.262698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.934 [2024-07-24 19:50:10.262714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262756] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.262979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.262993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.935 [2024-07-24 19:50:10.263857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.935 [2024-07-24 19:50:10.263870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.263886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.263899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.263914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.263927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:52.936 [2024-07-24 19:50:10.263943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.263957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.263973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.263986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 
19:50:10.264236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.264259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.264274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c9ddc0 is same with the state(6) to be set 00:20:52.936 [2024-07-24 19:50:10.265497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.265981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.265994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.266010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.266027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.266043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.266057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.936 [2024-07-24 19:50:10.266072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.936 [2024-07-24 19:50:10.266086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266647] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.266974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.266989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.937 [2024-07-24 19:50:10.267236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.937 [2024-07-24 19:50:10.267257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.938 [2024-07-24 19:50:10.267271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.938 [2024-07-24 19:50:10.267287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.938 [2024-07-24 19:50:10.267300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.938 [2024-07-24 19:50:10.267315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.938 [2024-07-24 19:50:10.267328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.938 [2024-07-24 19:50:10.267344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.938 [2024-07-24 19:50:10.267358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.938 [2024-07-24 19:50:10.267373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:52.938 [2024-07-24 19:50:10.267386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:52.938 [2024-07-24 19:50:10.267400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2e45830 is same with the state(6) to be set 00:20:52.938 [2024-07-24 19:50:10.268973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:52.938 [2024-07-24 19:50:10.269005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:52.938 [2024-07-24 19:50:10.269024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:52.938 [2024-07-24 19:50:10.269043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:52.938 [2024-07-24 19:50:10.269173] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:52.938 [2024-07-24 19:50:10.269199] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
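The three blocks above are the same pattern once per queue pair: every outstanding READ completes with status (00/08) after its submission queue is deleted. The pair SPDK prints in parentheses is NVMe Status Code Type / Status Code; within Status Code Type 0x0 (generic command status), 0x08 is "Command Aborted due to SQ Deletion". A minimal decoding sketch in Python (a hypothetical helper, not SPDK code; only the values seen in this log are mapped):

    # Decode the "(SCT/SC)" pair printed by spdk_nvme_print_completion above.
    # SCT 0x0 is the NVMe generic command status group; SC 0x08 in that group
    # is "Command Aborted due to SQ Deletion".
    GENERIC = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

    def decode(sct: int, sc: int) -> str:
        if sct == 0x0:
            return GENERIC.get(sc, "generic status 0x%02x" % sc)
        return "sct=0x%x sc=0x%02x" % (sct, sc)

    print(decode(0x0, 0x08))  # -> ABORTED - SQ DELETION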
00:20:52.938 [2024-07-24 19:50:10.269516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:53.196 task offset: 24064 on job bdev=Nvme2n1 fails
00:20:53.196
00:20:53.196 Latency(us)
00:20:53.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:53.196 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme1n1 ended in about 0.88 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme1n1 : 0.88 144.75 9.05 72.38 0.00 291464.28 21651.15 276513.37
00:20:53.196 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme2n1 ended in about 0.88 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme2n1 : 0.88 214.68 13.42 73.08 0.00 215069.91 4563.25 246997.90
00:20:53.196 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme3n1 ended in about 0.89 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme3n1 : 0.89 220.35 13.77 71.59 0.00 207563.03 18350.08 240784.12
00:20:53.196 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme4n1 ended in about 0.90 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme4n1 : 0.90 219.58 13.72 71.34 0.00 203855.74 15049.01 254765.13
00:20:53.196 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme5n1 ended in about 0.90 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme5n1 : 0.90 142.17 8.89 71.09 0.00 272176.17 21845.33 253211.69
00:20:53.196 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme6n1 ended in about 0.90 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme6n1 : 0.90 141.68 8.85 70.84 0.00 267302.43 19418.07 256318.58
00:20:53.196 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.196 Job: Nvme7n1 ended in about 0.91 seconds with error
00:20:53.196 Verification LBA range: start 0x0 length 0x400
00:20:53.196 Nvme7n1 : 0.91 141.19 8.82 70.59 0.00 262486.34 33981.63 237677.23
00:20:53.197 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.197 Job: Nvme8n1 ended in about 0.91 seconds with error
00:20:53.197 Verification LBA range: start 0x0 length 0x400
00:20:53.197 Nvme8n1 : 0.91 140.71 8.79 70.35 0.00 257682.65 19903.53 237677.23
00:20:53.197 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.197 Job: Nvme9n1 ended in about 0.88 seconds with error
00:20:53.197 Verification LBA range: start 0x0 length 0x400
00:20:53.197 Nvme9n1 : 0.88 218.22 13.64 18.18 0.00 222698.47 23204.60 234570.33
00:20:53.197 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:53.197 Job: Nvme10n1 ended in about 0.89 seconds with error
00:20:53.197 Verification LBA range: start 0x0 length 0x400
00:20:53.197 Nvme10n1 : 0.89 144.24 9.01 72.12 0.00 238356.61 21165.70 281173.71
00:20:53.197 ===================================================================================================================
00:20:53.197 Total : 1727.57 107.97 661.56 0.00 240429.78 4563.25 281173.71
00:20:53.197 [2024-07-24 19:50:10.297046] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:53.197 [2024-07-24 19:50:10.297128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:53.197 [2024-07-24 19:50:10.297470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.297509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239ac80 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.297534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239ac80 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.297652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.297678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a0ec0 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.297707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a0ec0 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.297842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.297867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244b660 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.297883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244b660 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.298008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.298032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2449590 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.298048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2449590 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.299749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:53.197 [2024-07-24 19:50:10.299780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:53.197 [2024-07-24 19:50:10.299798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:53.197 [2024-07-24 19:50:10.299815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:53.197 [2024-07-24 19:50:10.299975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.300003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e72610 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.300019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e72610 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.300133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.300158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2530f90 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.300173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2530f90 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.300198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239ac80 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.300221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a0ec0 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.300240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244b660 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.300296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2449590 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.300352] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:53.197 [2024-07-24 19:50:10.300376] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:53.197 [2024-07-24 19:50:10.300396] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:53.197 [2024-07-24 19:50:10.300416] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:53.197 [2024-07-24 19:50:10.300639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.300667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24354d0 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.300683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24354d0 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.300777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.300807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2370830 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.300823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2370830 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.300922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.300947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2394b50 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.300963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2394b50 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.301063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:53.197 [2024-07-24 19:50:10.301088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ac6e0 with addr=10.0.0.2, port=4420
00:20:53.197 [2024-07-24 19:50:10.301104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac6e0 is same with the state(6) to be set
00:20:53.197 [2024-07-24 19:50:10.301122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e72610 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.301141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2530f90 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.301158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:53.197 [2024-07-24 19:50:10.301170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:53.197 [2024-07-24 19:50:10.301186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:53.197 [2024-07-24 19:50:10.301205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:53.197 [2024-07-24 19:50:10.301219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:53.197 [2024-07-24 19:50:10.301237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:53.197 [2024-07-24 19:50:10.301262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:53.197 [2024-07-24 19:50:10.301276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:53.197 [2024-07-24 19:50:10.301289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:53.197 [2024-07-24 19:50:10.301306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:53.197 [2024-07-24 19:50:10.301319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:53.197 [2024-07-24 19:50:10.301332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:53.197 [2024-07-24 19:50:10.301428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:53.197 [2024-07-24 19:50:10.301449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:53.197 [2024-07-24 19:50:10.301461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:53.197 [2024-07-24 19:50:10.301472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:53.197 [2024-07-24 19:50:10.301487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24354d0 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.301506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2370830 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.301524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2394b50 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.301544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac6e0 (9): Bad file descriptor
00:20:53.197 [2024-07-24 19:50:10.301564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:53.197 [2024-07-24 19:50:10.301577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:53.197 [2024-07-24 19:50:10.301590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
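In the bdevperf Latency(us) summary above, the Total row sums the throughput columns across the per-device rows and takes the global min (4563.25, from Nvme2n1) and max (281173.71, from Nvme10n1) for the latency columns. A quick consistency check in Python, with the IOPS and Fail/s values transcribed from the table:

    # Per-device IOPS and Fail/s columns from the job summary above.
    iops = [144.75, 214.68, 220.35, 219.58, 142.17, 141.68, 141.19, 140.71, 218.22, 144.24]
    fails = [72.38, 73.08, 71.59, 71.34, 71.09, 70.84, 70.59, 70.35, 18.18, 72.12]
    print(round(sum(iops), 2))   # 1727.57, matching the Total row
    print(round(sum(fails), 2))  # 661.56, matching the Total row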
00:20:53.197 [2024-07-24 19:50:10.301606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:53.197 [2024-07-24 19:50:10.301620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:53.197 [2024-07-24 19:50:10.301633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:53.197 [2024-07-24 19:50:10.301671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:53.197 [2024-07-24 19:50:10.301689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:53.197 [2024-07-24 19:50:10.301701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:53.197 [2024-07-24 19:50:10.301713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:53.197 [2024-07-24 19:50:10.301726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:53.197 [2024-07-24 19:50:10.301742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:53.197 [2024-07-24 19:50:10.301755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:53.197 [2024-07-24 19:50:10.301767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:53.197 [2024-07-24 19:50:10.301783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:53.198 [2024-07-24 19:50:10.301795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:53.198 [2024-07-24 19:50:10.301808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:53.198 [2024-07-24 19:50:10.301823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:53.198 [2024-07-24 19:50:10.301835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:53.198 [2024-07-24 19:50:10.301848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:53.198 [2024-07-24 19:50:10.301884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:53.198 [2024-07-24 19:50:10.301900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:53.198 [2024-07-24 19:50:10.301912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:53.198 [2024-07-24 19:50:10.301923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
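The error burst above is consistent with what shutdown test case 3 provokes on purpose: the target process is already gone, so every pending reconnect for cnode1 through cnode10 hits a dead listener and bdev_nvme abandons the resets and failovers. The "connect() failed, errno = 111" entries are Linux ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 any more. A one-line check of that errno mapping (illustrative only, not part of the captured run):

  python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(111))'
  # prints: 111 Connection refused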
00:20:53.455 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:53.455 19:50:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1223895 00:20:54.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1223895) - No such process 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # nvmfcleanup 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:54.829 rmmod nvme_tcp 00:20:54.829 rmmod nvme_fabrics 00:20:54.829 rmmod nvme_keyring 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@282 -- # remove_spdk_ns 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.829 19:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:54.829 19:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.730 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:20:56.730 00:20:56.730 real 0m8.082s 00:20:56.730 user 0m20.736s 00:20:56.730 sys 0m1.439s 00:20:56.730 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 ************************************ 00:20:56.731 END TEST nvmf_shutdown_tc3 00:20:56.731 ************************************ 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:56.731 00:20:56.731 real 0m29.342s 00:20:56.731 user 1m25.357s 00:20:56.731 sys 0m6.234s 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 ************************************ 00:20:56.731 END TEST nvmf_shutdown 00:20:56.731 ************************************ 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:20:56.731 00:20:56.731 real 10m50.235s 00:20:56.731 user 25m39.846s 00:20:56.731 sys 2m30.596s 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # xtrace_disable 00:20:56.731 19:50:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 ************************************ 00:20:56.731 END TEST nvmf_target_extra 00:20:56.731 ************************************ 00:20:56.731 19:50:13 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:56.731 19:50:13 nvmf_tcp -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:20:56.731 19:50:13 nvmf_tcp -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:56.731 19:50:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 ************************************ 00:20:56.731 START TEST nvmf_host 00:20:56.731 ************************************ 00:20:56.731 19:50:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:56.731 * Looking for test storage... 
00:20:56.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.731 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.731 ************************************ 00:20:56.731 START TEST nvmf_multicontroller 00:20:56.731 ************************************ 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:56.731 * Looking for test storage... 
00:20:56.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.731 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.732 19:50:14 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.732 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 
-- # MALLOC_BLOCK_SIZE=512 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@452 -- # prepare_net_devs 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # local -g is_hw=no 00:20:56.732 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # remove_spdk_ns 00:20:56.989 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.989 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:56.989 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.990 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:20:56.990 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:20:56.990 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # xtrace_disable 00:20:56.990 19:50:14 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # pci_devs=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -a pci_devs 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # pci_net_devs=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # pci_drivers=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -A pci_drivers 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # net_devs=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # local -ga net_devs 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # e810=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # local -ga e810 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # x722=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # local -ga x722 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # mlx=() 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@302 -- # local -ga mlx 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:58.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:58.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:20:58.889 19:50:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:58.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # [[ up == up ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:58.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # is_hw=yes 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.889 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:20:58.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:20:58.890 00:20:58.890 --- 10.0.0.2 ping statistics --- 00:20:58.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.890 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:58.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:20:58.890 00:20:58.890 --- 10.0.0.1 ping statistics --- 00:20:58.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.890 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # return 0 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:20:58.890 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@725 -- # xtrace_disable 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@485 -- # nvmfpid=1226448 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@486 -- # waitforlisten 1226448 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@832 -- # '[' -z 1226448 ']' 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:59.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:59.148 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.148 [2024-07-24 19:50:16.338624] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:20:59.148 [2024-07-24 19:50:16.338709] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:59.148 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.148 [2024-07-24 19:50:16.414232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:59.407 [2024-07-24 19:50:16.530877] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.407 [2024-07-24 19:50:16.530939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.407 [2024-07-24 19:50:16.530956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.407 [2024-07-24 19:50:16.530969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.407 [2024-07-24 19:50:16.530981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.407 [2024-07-24 19:50:16.531064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.407 [2024-07-24 19:50:16.531177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:59.407 [2024-07-24 19:50:16.531182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@865 -- # return 0 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@731 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 [2024-07-24 19:50:16.678195] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 Malloc0 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 
19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 [2024-07-24 19:50:16.743533] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 [2024-07-24 19:50:16.751371] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.407 Malloc1 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.407 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.666 19:50:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1226470 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1226470 /var/tmp/bdevperf.sock 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@832 -- # '[' -z 1226470 ']' 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local max_retries=100 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
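By this point the target exposes two subsystems, cnode1 and cnode2, each listening on 10.0.0.2 ports 4420 and 4421, and bdevperf has been launched with -z -r /var/tmp/bdevperf.sock so it idles until driven over its own RPC socket. The waitforlisten wait shown here is essentially a poll loop against that socket; a minimal sketch of the pattern, assuming SPDK's rpc.py is on hand (this is not the actual helper from autotest_common.sh):

  sock=/var/tmp/bdevperf.sock
  for _ in $(seq 1 100); do
    # done once the socket exists and the app answers a trivial RPC
    [ -S "$sock" ] && scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
  done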
00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@841 -- # xtrace_disable 00:20:59.666 19:50:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@865 -- # return 0 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.924 NVMe0n1 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:20:59.924 1 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:59.924 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # local es=0 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.925 request: 00:20:59.925 { 00:20:59.925 "name": "NVMe0", 00:20:59.925 "trtype": "tcp", 00:20:59.925 "traddr": "10.0.0.2", 00:20:59.925 "adrfam": "ipv4", 00:20:59.925 
"trsvcid": "4420", 00:20:59.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.925 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:59.925 "hostaddr": "10.0.0.2", 00:20:59.925 "hostsvcid": "60000", 00:20:59.925 "prchk_reftag": false, 00:20:59.925 "prchk_guard": false, 00:20:59.925 "hdgst": false, 00:20:59.925 "ddgst": false, 00:20:59.925 "method": "bdev_nvme_attach_controller", 00:20:59.925 "req_id": 1 00:20:59.925 } 00:20:59.925 Got JSON-RPC error response 00:20:59.925 response: 00:20:59.925 { 00:20:59.925 "code": -114, 00:20:59.925 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:59.925 } 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # es=1 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # local es=0 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.925 request: 00:20:59.925 { 00:20:59.925 "name": "NVMe0", 00:20:59.925 "trtype": "tcp", 00:20:59.925 "traddr": "10.0.0.2", 00:20:59.925 "adrfam": "ipv4", 00:20:59.925 "trsvcid": "4420", 00:20:59.925 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.925 "hostaddr": "10.0.0.2", 00:20:59.925 "hostsvcid": "60000", 00:20:59.925 "prchk_reftag": false, 00:20:59.925 "prchk_guard": false, 00:20:59.925 "hdgst": false, 00:20:59.925 "ddgst": false, 00:20:59.925 "method": "bdev_nvme_attach_controller", 00:20:59.925 "req_id": 1 00:20:59.925 } 00:20:59.925 Got JSON-RPC error response 00:20:59.925 response: 00:20:59.925 { 00:20:59.925 "code": -114, 00:20:59.925 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:20:59.925 } 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # es=1 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # local es=0 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:59.925 request: 00:20:59.925 { 00:20:59.925 "name": "NVMe0", 00:20:59.925 "trtype": "tcp", 00:20:59.925 "traddr": "10.0.0.2", 00:20:59.925 "adrfam": "ipv4", 00:20:59.925 "trsvcid": "4420", 00:20:59.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.925 "hostaddr": "10.0.0.2", 00:20:59.925 "hostsvcid": "60000", 00:20:59.925 "prchk_reftag": false, 00:20:59.925 "prchk_guard": false, 00:20:59.925 "hdgst": false, 00:20:59.925 "ddgst": false, 00:20:59.925 "multipath": "disable", 00:20:59.925 "method": "bdev_nvme_attach_controller", 00:20:59.925 "req_id": 1 00:20:59.925 } 00:20:59.925 Got JSON-RPC error response 00:20:59.925 response: 00:20:59.925 { 00:20:59.925 "code": -114, 00:20:59.925 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:59.925 } 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # es=1 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@678 -- # (( !es == 0 )) 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # local es=0 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:20:59.925 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.183 request: 00:21:00.183 { 00:21:00.183 "name": "NVMe0", 00:21:00.183 "trtype": "tcp", 00:21:00.183 "traddr": "10.0.0.2", 00:21:00.183 "adrfam": "ipv4", 00:21:00.183 "trsvcid": "4420", 00:21:00.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.183 "hostaddr": "10.0.0.2", 00:21:00.183 "hostsvcid": "60000", 00:21:00.183 "prchk_reftag": false, 00:21:00.183 "prchk_guard": false, 00:21:00.183 "hdgst": false, 00:21:00.183 "ddgst": false, 00:21:00.183 "multipath": "failover", 00:21:00.183 "method": "bdev_nvme_attach_controller", 00:21:00.183 "req_id": 1 00:21:00.183 } 00:21:00.183 Got JSON-RPC error response 00:21:00.183 response: 00:21:00.183 { 00:21:00.183 "code": -114, 00:21:00.183 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:00.183 } 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # es=1 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.183 00:21:00.183 19:50:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:00.183 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:00.441 19:50:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.813 0 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1226470 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' -z 1226470 ']' 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # kill -0 1226470 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # uname 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1226470 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # process_name=reactor_0 
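A note on the attach/detach exercise above: every NOT-wrapped bdev_nvme_attach_controller call that reuses the controller name NVMe0 against the existing 10.0.0.2:4420 path comes back with JSON-RPC error -114 (a negative errno, i.e. EALREADY) — once pointing at a different subsystem (cnode2), once with multipath explicitly disabled (-x disable), and once in failover mode (-x failover) — while the follow-up attach through the second listener on port 4421 is accepted, detached again, and replaced by NVMe1 before bdevperf's perform_tests run. A minimal sketch of the same calls, assuming SPDK's stock scripts/rpc.py client (the rpc_cmd helper seen in the trace wraps it); socket path, addresses and NQNs are copied from the trace:

    # initial attach from earlier in the test: registers controller NVMe0
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # same name, different subsystem on the same path -> JSON-RPC error -114
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
    # same name and path with multipath disabled -> -114 as well
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
    # second listener (port 4421) for the same subsystem -> accepted
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1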
00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1226470' 00:21:01.813 killing process with pid 1226470 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # kill 1226470 00:21:01.813 19:50:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@975 -- # wait 1226470 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:01.813 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # read -r file 00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # sort -u 00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # cat 00:21:01.814 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:01.814 [2024-07-24 19:50:16.858476] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:21:01.814 [2024-07-24 19:50:16.858595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226470 ]
00:21:01.814 EAL: No free 2048 kB hugepages reported on node 1
00:21:01.814 [2024-07-24 19:50:16.922875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:01.814 [2024-07-24 19:50:17.033910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:01.814 [2024-07-24 19:50:17.693793] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name b5d7cfeb-63e6-42a3-96e2-9f2df9eaaed1 already exists
00:21:01.814 [2024-07-24 19:50:17.693834] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:b5d7cfeb-63e6-42a3-96e2-9f2df9eaaed1 alias for bdev NVMe1n1
00:21:01.814 [2024-07-24 19:50:17.693864] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:21:01.814 Running I/O for 1 seconds...
00:21:01.814
00:21:01.814 Latency(us)
00:21:01.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.814 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:21:01.814 NVMe0n1 : 1.00 19216.83 75.07 0.00 0.00 6650.60 4150.61 13786.83
00:21:01.814 ===================================================================================================================
00:21:01.814 Total : 19216.83 75.07 0.00 0.00 6650.60 4150.61 13786.83
00:21:01.814 Received shutdown signal, test time was about 1.000000 seconds
00:21:01.814
00:21:01.814 Latency(us)
00:21:01.814 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:01.814 ===================================================================================================================
00:21:01.814 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:01.814 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1619 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # read -r file
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # nvmfcleanup
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:01.814 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:01.814 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # '[' -n 1226448 ']'
00:21:02.072 19:50:19
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # killprocess 1226448 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' -z 1226448 ']' 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # kill -0 1226448 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # uname 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1226448 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1226448' 00:21:02.072 killing process with pid 1226448 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # kill 1226448 00:21:02.072 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@975 -- # wait 1226448 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@282 -- # remove_spdk_ns 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.330 19:50:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:21:04.288 00:21:04.288 real 0m7.531s 00:21:04.288 user 0m11.984s 00:21:04.288 sys 0m2.270s 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:04.288 ************************************ 00:21:04.288 END TEST nvmf_multicontroller 00:21:04.288 ************************************ 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.288 ************************************ 00:21:04.288 START TEST nvmf_aer 00:21:04.288 ************************************ 00:21:04.288 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:04.547 * Looking for test storage... 00:21:04.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:04.547 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:04.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:04.548 19:50:21 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # xtrace_disable 00:21:04.548 19:50:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # pci_devs=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -a pci_devs 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # pci_net_devs=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # pci_drivers=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -A pci_drivers 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # net_devs=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # local -ga net_devs 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # e810=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # local -ga e810 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # x722=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # local -ga x722 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # mlx=() 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # local -ga mlx 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:06.486 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:06.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:06.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.486 19:50:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:06.486 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # is_hw=yes 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2
00:21:06.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:06.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:21:06.486
00:21:06.486 --- 10.0.0.2 ping statistics ---
00:21:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:06.486 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:06.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:06.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:21:06.486
00:21:06.486 --- 10.0.0.1 ping statistics ---
00:21:06.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:06.486 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # return 0
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # '[' '' == iso ']'
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]]
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]]
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@472 -- # '[' tcp == tcp ']'
00:21:06.486 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # modprobe nvme-tcp
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@725 -- # xtrace_disable
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@485 -- # nvmfpid=1228690
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@486 -- # waitforlisten 1228690
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@832 -- # '[' -z 1228690 ']'
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local max_retries=100
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
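For context, the nvmftestinit plumbing above turns the two ice/E810 ports into a self-contained TCP test rig: cvl_0_0 is moved into a private network namespace as the target-side interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), the firewall is opened for the NVMe/TCP port, and single-packet pings verify the path in both directions before nvmf_tgt comes up inside the namespace. A condensed sketch of that setup, with commands, interface names and addresses all taken from the trace itself:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator
    # the harness then launches the target inside the namespace (backgrounded)
    # and polls /var/tmp/spdk.sock until the RPC server is listening:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF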
00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:06.487 19:50:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:06.487 [2024-07-24 19:50:23.748726] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:21:06.487 [2024-07-24 19:50:23.748808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:06.487 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.487 [2024-07-24 19:50:23.818770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:06.745 [2024-07-24 19:50:23.936962] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:06.745 [2024-07-24 19:50:23.937024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:06.745 [2024-07-24 19:50:23.937040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:06.745 [2024-07-24 19:50:23.937053] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:06.745 [2024-07-24 19:50:23.937065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:06.745 [2024-07-24 19:50:23.937149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.745 [2024-07-24 19:50:23.937205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.745 [2024-07-24 19:50:23.937323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:06.745 [2024-07-24 19:50:23.937327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:07.309 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:07.309 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@865 -- # return 0 00:21:07.309 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:21:07.309 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@731 -- # xtrace_disable 00:21:07.309 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 [2024-07-24 19:50:24.712511] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 Malloc0 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 [2024-07-24 19:50:24.766192] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.567 [ 00:21:07.567 { 00:21:07.567 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:07.567 "subtype": "Discovery", 00:21:07.567 "listen_addresses": [], 00:21:07.567 "allow_any_host": true, 00:21:07.567 "hosts": [] 00:21:07.567 }, 00:21:07.567 { 00:21:07.567 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.567 "subtype": "NVMe", 00:21:07.567 "listen_addresses": [ 00:21:07.567 { 00:21:07.567 "trtype": "TCP", 00:21:07.567 "adrfam": "IPv4", 00:21:07.567 "traddr": "10.0.0.2", 00:21:07.567 "trsvcid": "4420" 00:21:07.567 } 00:21:07.567 ], 00:21:07.567 "allow_any_host": true, 00:21:07.567 "hosts": [], 00:21:07.567 "serial_number": "SPDK00000000000001", 00:21:07.567 "model_number": "SPDK bdev Controller", 00:21:07.567 "max_namespaces": 2, 00:21:07.567 "min_cntlid": 1, 00:21:07.567 "max_cntlid": 65519, 00:21:07.567 "namespaces": [ 00:21:07.567 { 00:21:07.567 "nsid": 1, 00:21:07.567 "bdev_name": "Malloc0", 00:21:07.567 "name": "Malloc0", 00:21:07.567 "nguid": "F746C7A41C0C42C39AA61A33D9D81882", 00:21:07.567 "uuid": "f746c7a4-1c0c-42c3-9aa6-1a33d9d81882" 00:21:07.567 } 00:21:07.567 ] 00:21:07.567 } 00:21:07.567 ] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1228839 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # local i=0 00:21:07.567 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' 0 -lt 200 ']' 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # i=1 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # sleep 0.1 00:21:07.568 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' 1 -lt 200 ']' 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # i=2 00:21:07.568 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # sleep 0.1 00:21:07.825 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.825 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' 2 -lt 200 ']' 00:21:07.826 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # i=3 00:21:07.826 19:50:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # sleep 0.1 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1277 -- # return 0 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.826 Malloc1 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:07.826 [ 00:21:07.826 { 00:21:07.826 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:07.826 "subtype": "Discovery", 00:21:07.826 "listen_addresses": [], 00:21:07.826 "allow_any_host": true, 00:21:07.826 "hosts": [] 00:21:07.826 }, 00:21:07.826 { 00:21:07.826 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.826 "subtype": "NVMe", 00:21:07.826 "listen_addresses": [ 00:21:07.826 { 00:21:07.826 "trtype": "TCP", 00:21:07.826 "adrfam": "IPv4", 00:21:07.826 "traddr": "10.0.0.2", 00:21:07.826 "trsvcid": "4420" 00:21:07.826 } 00:21:07.826 
], 00:21:07.826 "allow_any_host": true, 00:21:07.826 "hosts": [], 00:21:07.826 "serial_number": "SPDK00000000000001", 00:21:07.826 "model_number": "SPDK bdev Controller", 00:21:07.826 "max_namespaces": 2, 00:21:07.826 "min_cntlid": 1, 00:21:07.826 "max_cntlid": 65519, 00:21:07.826 "namespaces": [ 00:21:07.826 { 00:21:07.826 "nsid": 1, 00:21:07.826 "bdev_name": "Malloc0", 00:21:07.826 "name": "Malloc0", 00:21:07.826 "nguid": "F746C7A41C0C42C39AA61A33D9D81882", 00:21:07.826 "uuid": "f746c7a4-1c0c-42c3-9aa6-1a33d9d81882" 00:21:07.826 }, 00:21:07.826 { 00:21:07.826 "nsid": 2, 00:21:07.826 "bdev_name": "Malloc1", 00:21:07.826 "name": "Malloc1", 00:21:07.826 "nguid": "5961A0CC3C764A5BB9509C0E56AD0A62", 00:21:07.826 "uuid": "5961a0cc-3c76-4a5b-b950-9c0e56ad0a62" 00:21:07.826 } 00:21:07.826 ] 00:21:07.826 } 00:21:07.826 ] 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1228839 00:21:07.826 Asynchronous Event Request test 00:21:07.826 Attaching to 10.0.0.2 00:21:07.826 Attached to 10.0.0.2 00:21:07.826 Registering asynchronous event callbacks... 00:21:07.826 Starting namespace attribute notice tests for all controllers... 00:21:07.826 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:07.826 aer_cb - Changed Namespace 00:21:07.826 Cleaning up... 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:07.826 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # nvmfcleanup 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.084 rmmod nvme_tcp 00:21:08.084 rmmod nvme_fabrics 00:21:08.084 rmmod nvme_keyring 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # '[' -n 1228690 ']' 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # killprocess 1228690 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@951 -- # '[' -z 1228690 ']' 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # kill -0 1228690 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # uname 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1228690 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1228690' 00:21:08.084 killing process with pid 1228690 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # kill 1228690 00:21:08.084 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@975 -- # wait 1228690 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@282 -- # remove_spdk_ns 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.342 19:50:25 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:21:10.879 00:21:10.879 real 0m6.030s 00:21:10.879 user 0m7.338s 00:21:10.879 sys 0m1.850s 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:10.879 ************************************ 00:21:10.879 END TEST nvmf_aer 00:21:10.879 ************************************ 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.879 ************************************ 00:21:10.879 START TEST nvmf_async_init 00:21:10.879 ************************************ 00:21:10.879 19:50:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:10.879 * Looking for test storage... 00:21:10.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.879 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:10.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer 
expression expected 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=9207127759494d0790eecb4ee6934ca7 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # xtrace_disable 00:21:10.880 19:50:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # pci_devs=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -a pci_devs 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # pci_net_devs=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # pci_drivers=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -A pci_drivers 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # net_devs=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # local -ga net_devs 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # e810=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@300 -- # local -ga e810 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # x722=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # local -ga x722 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # mlx=() 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # local -ga mlx 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:12.781 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:12.781 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:12.781 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:12.781 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # is_hw=yes 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:21:12.781 
19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.781 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:21:12.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:21:12.782 00:21:12.782 --- 10.0.0.2 ping statistics --- 00:21:12.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.782 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:21:12.782 00:21:12.782 --- 10.0.0.1 ping statistics --- 00:21:12.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.782 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # return 0 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@725 -- # xtrace_disable 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@485 -- # nvmfpid=1230892 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@486 -- # waitforlisten 1230892 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@832 -- # '[' -z 1230892 ']' 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:12.782 19:50:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:12.782 [2024-07-24 19:50:29.960580] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
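The trace above is the whole per-test network fixture: nvmf_tcp_init moves one E810 port into a private namespace to act as the target while the peer port stays in the root namespace as the initiator, then nvmfappstart launches nvmf_tgt inside that namespace. Condensed, with the device names and addresses this particular run happened to use:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # reachability checked both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A minimal stand-in for that wait, under the assumption that rpc_get_methods serves as the readiness probe (the actual probe is not visible in this trace):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do                      # bounded retry, ~10 s total
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done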
00:21:12.782 [2024-07-24 19:50:29.960680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.782 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.782 [2024-07-24 19:50:30.025435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.782 [2024-07-24 19:50:30.140339] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.782 [2024-07-24 19:50:30.140396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.782 [2024-07-24 19:50:30.140412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.782 [2024-07-24 19:50:30.140425] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.782 [2024-07-24 19:50:30.140437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:12.782 [2024-07-24 19:50:30.140465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@865 -- # return 0 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@731 -- # xtrace_disable 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.715 [2024-07-24 19:50:30.930965] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.715 null0 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:13.715 19:50:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.715 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 9207127759494d0790eecb4ee6934ca7 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.716 [2024-07-24 19:50:30.971165] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.716 19:50:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.974 nvme0n1 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.974 [ 00:21:13.974 { 00:21:13.974 "name": "nvme0n1", 00:21:13.974 "aliases": [ 00:21:13.974 "92071277-5949-4d07-90ee-cb4ee6934ca7" 00:21:13.974 ], 00:21:13.974 "product_name": "NVMe disk", 00:21:13.974 "block_size": 512, 00:21:13.974 "num_blocks": 2097152, 00:21:13.974 "uuid": "92071277-5949-4d07-90ee-cb4ee6934ca7", 00:21:13.974 "assigned_rate_limits": { 00:21:13.974 "rw_ios_per_sec": 0, 00:21:13.974 "rw_mbytes_per_sec": 0, 00:21:13.974 "r_mbytes_per_sec": 0, 00:21:13.974 "w_mbytes_per_sec": 0 00:21:13.974 }, 00:21:13.974 "claimed": false, 00:21:13.974 "zoned": false, 00:21:13.974 "supported_io_types": { 00:21:13.974 "read": true, 00:21:13.974 "write": true, 00:21:13.974 "unmap": false, 00:21:13.974 "flush": true, 00:21:13.974 "reset": true, 00:21:13.974 "nvme_admin": true, 00:21:13.974 "nvme_io": true, 00:21:13.974 "nvme_io_md": false, 00:21:13.974 "write_zeroes": true, 00:21:13.974 "zcopy": false, 00:21:13.974 "get_zone_info": false, 00:21:13.974 "zone_management": false, 00:21:13.974 "zone_append": false, 00:21:13.974 "compare": true, 00:21:13.974 "compare_and_write": true, 00:21:13.974 "abort": true, 00:21:13.974 "seek_hole": false, 00:21:13.974 "seek_data": false, 00:21:13.974 "copy": true, 00:21:13.974 "nvme_iov_md": 
false 00:21:13.974 }, 00:21:13.974 "memory_domains": [ 00:21:13.974 { 00:21:13.974 "dma_device_id": "system", 00:21:13.974 "dma_device_type": 1 00:21:13.974 } 00:21:13.974 ], 00:21:13.974 "driver_specific": { 00:21:13.974 "nvme": [ 00:21:13.974 { 00:21:13.974 "trid": { 00:21:13.974 "trtype": "TCP", 00:21:13.974 "adrfam": "IPv4", 00:21:13.974 "traddr": "10.0.0.2", 00:21:13.974 "trsvcid": "4420", 00:21:13.974 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:13.974 }, 00:21:13.974 "ctrlr_data": { 00:21:13.974 "cntlid": 1, 00:21:13.974 "vendor_id": "0x8086", 00:21:13.974 "model_number": "SPDK bdev Controller", 00:21:13.974 "serial_number": "00000000000000000000", 00:21:13.974 "firmware_revision": "24.09", 00:21:13.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:13.974 "oacs": { 00:21:13.974 "security": 0, 00:21:13.974 "format": 0, 00:21:13.974 "firmware": 0, 00:21:13.974 "ns_manage": 0 00:21:13.974 }, 00:21:13.974 "multi_ctrlr": true, 00:21:13.974 "ana_reporting": false 00:21:13.974 }, 00:21:13.974 "vs": { 00:21:13.974 "nvme_version": "1.3" 00:21:13.974 }, 00:21:13.974 "ns_data": { 00:21:13.974 "id": 1, 00:21:13.974 "can_share": true 00:21:13.974 } 00:21:13.974 } 00:21:13.974 ], 00:21:13.974 "mp_policy": "active_passive" 00:21:13.974 } 00:21:13.974 } 00:21:13.974 ] 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:13.974 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:13.974 [2024-07-24 19:50:31.224219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:13.974 [2024-07-24 19:50:31.224338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22341d0 (9): Bad file descriptor 00:21:14.233 [2024-07-24 19:50:31.366381] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
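The JSON dump is the payload of the test: null0 was added to cnode0 with -g <nguid>, so the attached nvme0n1 must surface that namespace GUID as its bdev uuid and keep it across the controller reset (the second dump below shows the same uuid with cntlid bumped from 1 to 2). One way to express the invariant, using jq for brevity (the script's own extraction may differ):

    nguid=9207127759494d0790eecb4ee6934ca7     # value generated earlier in this run
    uuid=$(rpc_cmd bdev_get_bdevs -b nvme0n1 | jq -r '.[0].uuid' | tr -d -)
    [ "$uuid" = "$nguid" ]                     # must hold before and after the reset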
00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.233 [ 00:21:14.233 { 00:21:14.233 "name": "nvme0n1", 00:21:14.233 "aliases": [ 00:21:14.233 "92071277-5949-4d07-90ee-cb4ee6934ca7" 00:21:14.233 ], 00:21:14.233 "product_name": "NVMe disk", 00:21:14.233 "block_size": 512, 00:21:14.233 "num_blocks": 2097152, 00:21:14.233 "uuid": "92071277-5949-4d07-90ee-cb4ee6934ca7", 00:21:14.233 "assigned_rate_limits": { 00:21:14.233 "rw_ios_per_sec": 0, 00:21:14.233 "rw_mbytes_per_sec": 0, 00:21:14.233 "r_mbytes_per_sec": 0, 00:21:14.233 "w_mbytes_per_sec": 0 00:21:14.233 }, 00:21:14.233 "claimed": false, 00:21:14.233 "zoned": false, 00:21:14.233 "supported_io_types": { 00:21:14.233 "read": true, 00:21:14.233 "write": true, 00:21:14.233 "unmap": false, 00:21:14.233 "flush": true, 00:21:14.233 "reset": true, 00:21:14.233 "nvme_admin": true, 00:21:14.233 "nvme_io": true, 00:21:14.233 "nvme_io_md": false, 00:21:14.233 "write_zeroes": true, 00:21:14.233 "zcopy": false, 00:21:14.233 "get_zone_info": false, 00:21:14.233 "zone_management": false, 00:21:14.233 "zone_append": false, 00:21:14.233 "compare": true, 00:21:14.233 "compare_and_write": true, 00:21:14.233 "abort": true, 00:21:14.233 "seek_hole": false, 00:21:14.233 "seek_data": false, 00:21:14.233 "copy": true, 00:21:14.233 "nvme_iov_md": false 00:21:14.233 }, 00:21:14.233 "memory_domains": [ 00:21:14.233 { 00:21:14.233 "dma_device_id": "system", 00:21:14.233 "dma_device_type": 1 00:21:14.233 } 00:21:14.233 ], 00:21:14.233 "driver_specific": { 00:21:14.233 "nvme": [ 00:21:14.233 { 00:21:14.233 "trid": { 00:21:14.233 "trtype": "TCP", 00:21:14.233 "adrfam": "IPv4", 00:21:14.233 "traddr": "10.0.0.2", 00:21:14.233 "trsvcid": "4420", 00:21:14.233 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.233 }, 00:21:14.233 "ctrlr_data": { 00:21:14.233 "cntlid": 2, 00:21:14.233 "vendor_id": "0x8086", 00:21:14.233 "model_number": "SPDK bdev Controller", 00:21:14.233 "serial_number": "00000000000000000000", 00:21:14.233 "firmware_revision": "24.09", 00:21:14.233 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.233 "oacs": { 00:21:14.233 "security": 0, 00:21:14.233 "format": 0, 00:21:14.233 "firmware": 0, 00:21:14.233 "ns_manage": 0 00:21:14.233 }, 00:21:14.233 "multi_ctrlr": true, 00:21:14.233 "ana_reporting": false 00:21:14.233 }, 00:21:14.233 "vs": { 00:21:14.233 "nvme_version": "1.3" 00:21:14.233 }, 00:21:14.233 "ns_data": { 00:21:14.233 "id": 1, 00:21:14.233 "can_share": true 00:21:14.233 } 00:21:14.233 } 00:21:14.233 ], 00:21:14.233 "mp_policy": "active_passive" 00:21:14.233 } 00:21:14.233 } 00:21:14.233 ] 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.233 19:50:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.d9q2uW48yB 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.d9q2uW48yB 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.233 [2024-07-24 19:50:31.416981] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.233 [2024-07-24 19:50:31.417168] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.233 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.d9q2uW48yB 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.234 [2024-07-24 19:50:31.424994] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.d9q2uW48yB 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.234 [2024-07-24 19:50:31.433018] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.234 [2024-07-24 19:50:31.433088] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:14.234 nvme0n1 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 
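The TLS leg, condensed from the trace above (the key value and temp path are this run's; NVMeTLSkey-1:01:...: is the NVMe-oF PSK interchange format, and both --psk call sites log deprecation warnings slated for v24.09):

    key_path=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"                     # restrict before handing the path to the RPC
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
            -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
            nqn.2016-06.io.spdk:host1 --psk "$key_path"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
            -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 \
            --psk "$key_path"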
00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.234 [ 00:21:14.234 { 00:21:14.234 "name": "nvme0n1", 00:21:14.234 "aliases": [ 00:21:14.234 "92071277-5949-4d07-90ee-cb4ee6934ca7" 00:21:14.234 ], 00:21:14.234 "product_name": "NVMe disk", 00:21:14.234 "block_size": 512, 00:21:14.234 "num_blocks": 2097152, 00:21:14.234 "uuid": "92071277-5949-4d07-90ee-cb4ee6934ca7", 00:21:14.234 "assigned_rate_limits": { 00:21:14.234 "rw_ios_per_sec": 0, 00:21:14.234 "rw_mbytes_per_sec": 0, 00:21:14.234 "r_mbytes_per_sec": 0, 00:21:14.234 "w_mbytes_per_sec": 0 00:21:14.234 }, 00:21:14.234 "claimed": false, 00:21:14.234 "zoned": false, 00:21:14.234 "supported_io_types": { 00:21:14.234 "read": true, 00:21:14.234 "write": true, 00:21:14.234 "unmap": false, 00:21:14.234 "flush": true, 00:21:14.234 "reset": true, 00:21:14.234 "nvme_admin": true, 00:21:14.234 "nvme_io": true, 00:21:14.234 "nvme_io_md": false, 00:21:14.234 "write_zeroes": true, 00:21:14.234 "zcopy": false, 00:21:14.234 "get_zone_info": false, 00:21:14.234 "zone_management": false, 00:21:14.234 "zone_append": false, 00:21:14.234 "compare": true, 00:21:14.234 "compare_and_write": true, 00:21:14.234 "abort": true, 00:21:14.234 "seek_hole": false, 00:21:14.234 "seek_data": false, 00:21:14.234 "copy": true, 00:21:14.234 "nvme_iov_md": false 00:21:14.234 }, 00:21:14.234 "memory_domains": [ 00:21:14.234 { 00:21:14.234 "dma_device_id": "system", 00:21:14.234 "dma_device_type": 1 00:21:14.234 } 00:21:14.234 ], 00:21:14.234 "driver_specific": { 00:21:14.234 "nvme": [ 00:21:14.234 { 00:21:14.234 "trid": { 00:21:14.234 "trtype": "TCP", 00:21:14.234 "adrfam": "IPv4", 00:21:14.234 "traddr": "10.0.0.2", 00:21:14.234 "trsvcid": "4421", 00:21:14.234 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:14.234 }, 00:21:14.234 "ctrlr_data": { 00:21:14.234 "cntlid": 3, 00:21:14.234 "vendor_id": "0x8086", 00:21:14.234 "model_number": "SPDK bdev Controller", 00:21:14.234 "serial_number": "00000000000000000000", 00:21:14.234 "firmware_revision": "24.09", 00:21:14.234 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:14.234 "oacs": { 00:21:14.234 "security": 0, 00:21:14.234 "format": 0, 00:21:14.234 "firmware": 0, 00:21:14.234 "ns_manage": 0 00:21:14.234 }, 00:21:14.234 "multi_ctrlr": true, 00:21:14.234 "ana_reporting": false 00:21:14.234 }, 00:21:14.234 "vs": { 00:21:14.234 "nvme_version": "1.3" 00:21:14.234 }, 00:21:14.234 "ns_data": { 00:21:14.234 "id": 1, 00:21:14.234 "can_share": true 00:21:14.234 } 00:21:14.234 } 00:21:14.234 ], 00:21:14.234 "mp_policy": "active_passive" 00:21:14.234 } 00:21:14.234 } 00:21:14.234 ] 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.d9q2uW48yB 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:21:14.234 19:50:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # nvmfcleanup 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.234 rmmod nvme_tcp 00:21:14.234 rmmod nvme_fabrics 00:21:14.234 rmmod nvme_keyring 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # '[' -n 1230892 ']' 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # killprocess 1230892 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' -z 1230892 ']' 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # kill -0 1230892 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # uname 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:14.234 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1230892 00:21:14.492 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:14.492 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:14.492 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1230892' 00:21:14.492 killing process with pid 1230892 00:21:14.492 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # kill 1230892 00:21:14.492 [2024-07-24 19:50:31.622810] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:14.492 [2024-07-24 19:50:31.622842] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:14.492 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@975 -- # wait 1230892 00:21:14.492 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:21:14.751 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:21:14.751 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:21:14.751 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.751 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@282 -- # remove_spdk_ns 00:21:14.751 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.751 19:50:31 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.751 19:50:31 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:21:16.652 00:21:16.652 real 0m6.211s 00:21:16.652 user 0m2.940s 00:21:16.652 sys 0m1.880s 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:16.652 ************************************ 00:21:16.652 END TEST nvmf_async_init 00:21:16.652 ************************************ 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.652 ************************************ 00:21:16.652 START TEST dma 00:21:16.652 ************************************ 00:21:16.652 19:50:33 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:16.652 * Looking for test storage... 00:21:16.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.652 
19:50:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.652 19:50:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.653 19:50:34 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.653 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:16.912 00:21:16.912 real 0m0.066s 00:21:16.912 user 0m0.030s 00:21:16.912 sys 0m0.041s 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:16.912 ************************************ 00:21:16.912 END TEST dma 00:21:16.912 ************************************ 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.912 ************************************ 00:21:16.912 START TEST nvmf_identify 00:21:16.912 ************************************ 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:16.912 * Looking for test storage... 
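Two notes on this stretch. dma.sh is effectively a no-op on TCP: the '[' tcp '!=' rdma ']' / exit 0 pair at host/dma.sh lines 12-13 is the entire test, hence the 0m0.066s runtime above. Separately, the recurring "common.sh: line 33: [: : integer expression expected" message (it reappears every time common.sh is sourced in this log) is bash objecting to an empty string in an arithmetic test; the variable's name is not visible in the trace, so the pattern below is only illustrative:

    [ "$SOME_TEST_FLAG" -eq 1 ]        # unset/empty flag -> "integer expression expected"
    [ "${SOME_TEST_FLAG:-0}" -eq 1 ]   # defaulting to 0 keeps the check quiet and false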
00:21:16.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.912 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:16.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # xtrace_disable 00:21:16.913 19:50:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # pci_devs=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -a pci_devs 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # pci_net_devs=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # pci_drivers=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -A pci_drivers 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # net_devs=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # local -ga net_devs 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # e810=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # local -ga e810 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # x722=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # local -ga x722 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # mlx=() 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # local -ga mlx 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:18.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:18.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.813 
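gather_supported_nvmf_pci_devs above matches PCI functions against known Intel (0x8086) and Mellanox (0x15b3) device IDs and, with SPDK_TEST_NVMF_NICS=e810, keeps the two 0x159b (E810) ports it finds. A rough sysfs-based equivalent of that scan (the real helper builds a pci_bus_cache first; this sketch walks sysfs directly):

    for pci in /sys/bus/pci/devices/*; do
        ven=$(cat "$pci/vendor"); dev=$(cat "$pci/device")
        if [ "$ven" = "0x8086" ] && [ "$dev" = "0x159b" ]; then
            echo "Found ${pci##*/} ($ven - $dev)"
            ls "$pci/net" 2>/dev/null   # kernel netdev(s) bound to this port, e.g. cvl_0_0
        fi
    done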
19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:18.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:18.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # is_hw=yes 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:21:18.813 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.814 19:50:36 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:21:18.814 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:21:19.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:21:19.072 00:21:19.072 --- 10.0.0.2 ping statistics --- 00:21:19.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.072 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:21:19.072 00:21:19.072 --- 10.0.0.1 ping statistics --- 00:21:19.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.072 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # return 0 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@725 -- # xtrace_disable 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1233075 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:19.072 19:50:36 
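nvmf_tcp_init above builds a point-to-point topology out of the two E810 ports: cvl_0_0 becomes the target interface and is moved into a fresh network namespace, cvl_0_1 stays in the root namespace as the initiator, and an iptables rule opens port 4420 toward the target. Condensed from the commands in the trace (interface and namespace names as used by this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Both ping checks in the log come back with sub-millisecond RTT, confirming the namespaces can reach each other before the target is started.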
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1233075 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@832 -- # '[' -z 1233075 ']' 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:19.072 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.072 [2024-07-24 19:50:36.316028] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:21:19.072 [2024-07-24 19:50:36.316114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.072 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.072 [2024-07-24 19:50:36.383756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.329 [2024-07-24 19:50:36.502198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.329 [2024-07-24 19:50:36.502275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.329 [2024-07-24 19:50:36.502293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.329 [2024-07-24 19:50:36.502306] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.329 [2024-07-24 19:50:36.502317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
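waitforlisten blocks until pid 1233075 has created its RPC socket; the banner above confirms the target came up with four reactors (-m 0xF) inside the target namespace. A simplified version of that wait loop (the real helper in autotest_common.sh retries up to max_retries=100 and also probes the socket over RPC):

    pid=1233075 rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        kill -0 "$pid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
        [ -S "$rpc_sock" ] && break   # UNIX socket appears once the app listens
        sleep 0.1
    done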
00:21:19.329 [2024-07-24 19:50:36.502400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.329 [2024-07-24 19:50:36.502454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.329 [2024-07-24 19:50:36.502568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.329 [2024-07-24 19:50:36.502570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@865 -- # return 0 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.329 [2024-07-24 19:50:36.629343] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@731 -- # xtrace_disable 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:19.329 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.330 Malloc0 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.330 [2024-07-24 19:50:36.700220] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.330 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.589 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.589 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:19.589 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.589 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.589 [ 00:21:19.589 { 00:21:19.589 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:19.589 "subtype": "Discovery", 00:21:19.589 "listen_addresses": [ 00:21:19.589 { 00:21:19.589 "trtype": "TCP", 00:21:19.589 "adrfam": "IPv4", 00:21:19.589 "traddr": "10.0.0.2", 00:21:19.590 "trsvcid": "4420" 00:21:19.590 } 00:21:19.590 ], 00:21:19.590 "allow_any_host": true, 00:21:19.590 "hosts": [] 00:21:19.590 }, 00:21:19.590 { 00:21:19.590 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.590 "subtype": "NVMe", 00:21:19.590 "listen_addresses": [ 00:21:19.590 { 00:21:19.590 "trtype": "TCP", 00:21:19.590 "adrfam": "IPv4", 00:21:19.590 "traddr": "10.0.0.2", 00:21:19.590 "trsvcid": "4420" 00:21:19.590 } 00:21:19.590 ], 00:21:19.590 "allow_any_host": true, 00:21:19.590 "hosts": [], 00:21:19.590 "serial_number": "SPDK00000000000001", 00:21:19.590 "model_number": "SPDK bdev Controller", 00:21:19.590 "max_namespaces": 32, 00:21:19.590 "min_cntlid": 1, 00:21:19.590 "max_cntlid": 65519, 00:21:19.590 "namespaces": [ 00:21:19.590 { 00:21:19.590 "nsid": 1, 00:21:19.590 "bdev_name": "Malloc0", 00:21:19.590 "name": "Malloc0", 00:21:19.590 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:19.590 "eui64": "ABCDEF0123456789", 00:21:19.590 "uuid": "6df58504-d9a7-4e13-8628-278da1f0f2c2" 00:21:19.590 } 00:21:19.590 ] 00:21:19.590 } 00:21:19.590 ] 00:21:19.590 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.590 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:19.590 [2024-07-24 19:50:36.738393] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
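The nvmf_get_subsystems dump above shows the finished configuration: the discovery subsystem plus cnode1 with Malloc0 as namespace 1, both listening on 10.0.0.2:4420. rpc_cmd is a thin wrapper around scripts/rpc.py, so the provisioning sequence traced above amounts to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420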
00:21:19.590 [2024-07-24 19:50:36.738440] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233173 ] 00:21:19.590 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.590 [2024-07-24 19:50:36.772472] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:19.590 [2024-07-24 19:50:36.772556] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:19.590 [2024-07-24 19:50:36.772567] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:19.590 [2024-07-24 19:50:36.772582] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:19.590 [2024-07-24 19:50:36.772594] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:19.590 [2024-07-24 19:50:36.772838] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:19.590 [2024-07-24 19:50:36.772886] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e4f540 0 00:21:19.590 [2024-07-24 19:50:36.787256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:19.590 [2024-07-24 19:50:36.787282] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:19.590 [2024-07-24 19:50:36.787292] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:19.590 [2024-07-24 19:50:36.787298] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:19.590 [2024-07-24 19:50:36.787349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.787362] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.787369] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.590 [2024-07-24 19:50:36.787387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:19.590 [2024-07-24 19:50:36.787413] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.590 [2024-07-24 19:50:36.795260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.590 [2024-07-24 19:50:36.795279] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.590 [2024-07-24 19:50:36.795287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.590 [2024-07-24 19:50:36.795309] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:19.590 [2024-07-24 19:50:36.795323] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:19.590 [2024-07-24 19:50:36.795332] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:19.590 [2024-07-24 19:50:36.795357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795365] nvme_tcp.c: 
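The identify tool opens a plain TCP socket to 10.0.0.2:4420 (posix sock impl), exchanges ICReq/ICResp (the pdu type = 1 lines), then sends FABRIC CONNECT on the admin queue and is assigned CNTLID 0x0001. The kernel initiator performs the same handshake; assuming stock nvme-cli is available in the initiator namespace, a roughly equivalent check would be:

    nvme discover -t tcp -a 10.0.0.2 -s 4420                              # same discovery log, rendered by nvme-cli
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1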
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.590 [2024-07-24 19:50:36.795383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.590 [2024-07-24 19:50:36.795407] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.590 [2024-07-24 19:50:36.795576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.590 [2024-07-24 19:50:36.795593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.590 [2024-07-24 19:50:36.795603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.590 [2024-07-24 19:50:36.795628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:19.590 [2024-07-24 19:50:36.795644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:19.590 [2024-07-24 19:50:36.795659] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.590 [2024-07-24 19:50:36.795684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.590 [2024-07-24 19:50:36.795706] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.590 [2024-07-24 19:50:36.795879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.590 [2024-07-24 19:50:36.795897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.590 [2024-07-24 19:50:36.795905] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795912] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.590 [2024-07-24 19:50:36.795920] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:19.590 [2024-07-24 19:50:36.795935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:19.590 [2024-07-24 19:50:36.795951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.795966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.590 [2024-07-24 19:50:36.795976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.590 [2024-07-24 19:50:36.795998] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.590 [2024-07-24 19:50:36.796098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.590 
[2024-07-24 19:50:36.796114] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.590 [2024-07-24 19:50:36.796124] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.796131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.590 [2024-07-24 19:50:36.796140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:19.590 [2024-07-24 19:50:36.796157] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.796166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.796175] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.590 [2024-07-24 19:50:36.796187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.590 [2024-07-24 19:50:36.796208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.590 [2024-07-24 19:50:36.796368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.590 [2024-07-24 19:50:36.796386] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.590 [2024-07-24 19:50:36.796394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.590 [2024-07-24 19:50:36.796401] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.590 [2024-07-24 19:50:36.796409] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:19.590 [2024-07-24 19:50:36.796417] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:19.590 [2024-07-24 19:50:36.796436] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:19.590 [2024-07-24 19:50:36.796553] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:19.591 [2024-07-24 19:50:36.796562] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:19.591 [2024-07-24 19:50:36.796576] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.796583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.796589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.796600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.591 [2024-07-24 19:50:36.796635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.591 [2024-07-24 19:50:36.796786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.591 [2024-07-24 19:50:36.796802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.591 [2024-07-24 19:50:36.796812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.796819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.591 [2024-07-24 19:50:36.796827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:19.591 [2024-07-24 19:50:36.796844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.796853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.796863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.796874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.591 [2024-07-24 19:50:36.796895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.591 [2024-07-24 19:50:36.797045] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.591 [2024-07-24 19:50:36.797061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.591 [2024-07-24 19:50:36.797070] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.591 [2024-07-24 19:50:36.797086] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:19.591 [2024-07-24 19:50:36.797094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:19.591 [2024-07-24 19:50:36.797108] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:19.591 [2024-07-24 19:50:36.797125] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:19.591 [2024-07-24 19:50:36.797142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.797177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.591 [2024-07-24 19:50:36.797198] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.591 [2024-07-24 19:50:36.797355] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.591 [2024-07-24 19:50:36.797378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.591 [2024-07-24 19:50:36.797392] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797403] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4f540): datao=0, datal=4096, cccid=0 00:21:19.591 [2024-07-24 19:50:36.797417] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eaf3c0) on tqpair(0x1e4f540): expected_datao=0, payload_size=4096 00:21:19.591 [2024-07-24 19:50:36.797427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797446] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797460] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797490] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.591 [2024-07-24 19:50:36.797505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.591 [2024-07-24 19:50:36.797511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797518] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.591 [2024-07-24 19:50:36.797544] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:19.591 [2024-07-24 19:50:36.797553] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:19.591 [2024-07-24 19:50:36.797560] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:19.591 [2024-07-24 19:50:36.797569] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:19.591 [2024-07-24 19:50:36.797577] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:19.591 [2024-07-24 19:50:36.797585] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:19.591 [2024-07-24 19:50:36.797600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:19.591 [2024-07-24 19:50:36.797619] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.797645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.591 [2024-07-24 19:50:36.797666] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.591 [2024-07-24 19:50:36.797826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.591 [2024-07-24 19:50:36.797841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.591 [2024-07-24 19:50:36.797848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.591 [2024-07-24 19:50:36.797871] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.797894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.591 [2024-07-24 19:50:36.797904] 
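The DEBUG lines above trace the standard fabrics bring-up ladder: property reads of VS and CAP, a CC check, disable until CSTS.RDY = 0, CC.EN = 1, wait for CSTS.RDY = 1, then Identify Controller, whose results (MDTS 131072, CNTLID 0x0001, max_sges 16, fused compare-and-write) are echoed just above. When reviewing a saved copy of such a log, the ladder can be summarized in one line (the log file name here is illustrative):

    grep -o 'setting state to [A-Za-z0-9 .=]*' identify.log | uniq -c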
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797911] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.797930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.591 [2024-07-24 19:50:36.797940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.797962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.591 [2024-07-24 19:50:36.797972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.797994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.798000] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.798008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.591 [2024-07-24 19:50:36.798017] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:19.591 [2024-07-24 19:50:36.798038] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:19.591 [2024-07-24 19:50:36.798056] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.798064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.798074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.591 [2024-07-24 19:50:36.798096] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf3c0, cid 0, qid 0 00:21:19.591 [2024-07-24 19:50:36.798122] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf540, cid 1, qid 0 00:21:19.591 [2024-07-24 19:50:36.798129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf6c0, cid 2, qid 0 00:21:19.591 [2024-07-24 19:50:36.798136] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 3, qid 0 00:21:19.591 [2024-07-24 19:50:36.798143] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf9c0, cid 4, qid 0 00:21:19.591 [2024-07-24 19:50:36.798319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.591 [2024-07-24 19:50:36.798335] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.591 [2024-07-24 19:50:36.798342] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.798349] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf9c0) on tqpair=0x1e4f540 00:21:19.591 [2024-07-24 19:50:36.798357] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:19.591 [2024-07-24 19:50:36.798366] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:19.591 [2024-07-24 19:50:36.798386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.591 [2024-07-24 19:50:36.798397] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4f540) 00:21:19.591 [2024-07-24 19:50:36.798408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.591 [2024-07-24 19:50:36.798430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf9c0, cid 4, qid 0 00:21:19.591 [2024-07-24 19:50:36.798555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.591 [2024-07-24 19:50:36.798573] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.591 [2024-07-24 19:50:36.798581] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798590] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4f540): datao=0, datal=4096, cccid=4 00:21:19.592 [2024-07-24 19:50:36.798607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eaf9c0) on tqpair(0x1e4f540): expected_datao=0, payload_size=4096 00:21:19.592 [2024-07-24 19:50:36.798620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798636] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798644] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.592 [2024-07-24 19:50:36.798666] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.592 [2024-07-24 19:50:36.798672] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf9c0) on tqpair=0x1e4f540 00:21:19.592 [2024-07-24 19:50:36.798699] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:19.592 [2024-07-24 19:50:36.798737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4f540) 00:21:19.592 [2024-07-24 19:50:36.798759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.592 [2024-07-24 19:50:36.798770] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798777] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.798783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e4f540) 00:21:19.592 [2024-07-24 19:50:36.798792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.592 [2024-07-24 19:50:36.798833] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
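Once ready, the driver queues four Async Event Requests (matching the Async Event Request Limit of 4 reported in the identify dump below), reads the Keep Alive Timer feature (cdw10:0000000f), and settles on a keep-alive every 5000000 us (5 s). With a kernel initiator connected as sketched earlier, the same feature could be read back, assuming the controller appears as /dev/nvme0:

    nvme get-feature /dev/nvme0 -f 0x0f   # Feature 0Fh = Keep Alive Timer (KATO)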
req 0x1eaf9c0, cid 4, qid 0 00:21:19.592 [2024-07-24 19:50:36.798845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eafb40, cid 5, qid 0 00:21:19.592 [2024-07-24 19:50:36.799017] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.592 [2024-07-24 19:50:36.799037] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.592 [2024-07-24 19:50:36.799044] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.799051] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4f540): datao=0, datal=1024, cccid=4 00:21:19.592 [2024-07-24 19:50:36.799060] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eaf9c0) on tqpair(0x1e4f540): expected_datao=0, payload_size=1024 00:21:19.592 [2024-07-24 19:50:36.799069] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.799080] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.799087] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.799096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.592 [2024-07-24 19:50:36.799105] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.592 [2024-07-24 19:50:36.799111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.799118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eafb40) on tqpair=0x1e4f540 00:21:19.592 [2024-07-24 19:50:36.839353] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.592 [2024-07-24 19:50:36.839373] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.592 [2024-07-24 19:50:36.839381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839388] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf9c0) on tqpair=0x1e4f540 00:21:19.592 [2024-07-24 19:50:36.839406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839416] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4f540) 00:21:19.592 [2024-07-24 19:50:36.839432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.592 [2024-07-24 19:50:36.839466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf9c0, cid 4, qid 0 00:21:19.592 [2024-07-24 19:50:36.839606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.592 [2024-07-24 19:50:36.839623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.592 [2024-07-24 19:50:36.839630] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839636] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4f540): datao=0, datal=3072, cccid=4 00:21:19.592 [2024-07-24 19:50:36.839646] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eaf9c0) on tqpair(0x1e4f540): expected_datao=0, payload_size=3072 00:21:19.592 [2024-07-24 19:50:36.839658] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839674] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839686] 
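The GET LOG PAGE exchanges above show how the discovery log (log page 70h) is fetched: a first 4096-byte read of the page, a follow-up 3072-byte read at an offset, and finally an 8-byte re-read of the generation counter to confirm the log did not change underneath the reader. The header of the same page can be pulled manually, assuming an nvme-cli connection to the discovery controller exposed as /dev/nvme0:

    nvme get-log /dev/nvme0 --log-id=0x70 --log-len=16 -b | xxd   # genctr + numrec header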
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.592 [2024-07-24 19:50:36.839709] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.592 [2024-07-24 19:50:36.839716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839723] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf9c0) on tqpair=0x1e4f540 00:21:19.592 [2024-07-24 19:50:36.839753] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e4f540) 00:21:19.592 [2024-07-24 19:50:36.839773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.592 [2024-07-24 19:50:36.839803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf9c0, cid 4, qid 0 00:21:19.592 [2024-07-24 19:50:36.839956] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.592 [2024-07-24 19:50:36.839986] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.592 [2024-07-24 19:50:36.839993] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.839999] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e4f540): datao=0, datal=8, cccid=4 00:21:19.592 [2024-07-24 19:50:36.840006] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1eaf9c0) on tqpair(0x1e4f540): expected_datao=0, payload_size=8 00:21:19.592 [2024-07-24 19:50:36.840014] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.840024] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.840031] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.880348] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.592 [2024-07-24 19:50:36.880369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.592 [2024-07-24 19:50:36.880379] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.592 [2024-07-24 19:50:36.880386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf9c0) on tqpair=0x1e4f540 00:21:19.592 ===================================================== 00:21:19.592 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:19.592 ===================================================== 00:21:19.592 Controller Capabilities/Features 00:21:19.592 ================================ 00:21:19.592 Vendor ID: 0000 00:21:19.592 Subsystem Vendor ID: 0000 00:21:19.592 Serial Number: .................... 00:21:19.592 Model Number: ........................................ 
00:21:19.592 Firmware Version: 24.09 00:21:19.592 Recommended Arb Burst: 0 00:21:19.592 IEEE OUI Identifier: 00 00 00 00:21:19.592 Multi-path I/O 00:21:19.592 May have multiple subsystem ports: No 00:21:19.592 May have multiple controllers: No 00:21:19.592 Associated with SR-IOV VF: No 00:21:19.592 Max Data Transfer Size: 131072 00:21:19.592 Max Number of Namespaces: 0 00:21:19.592 Max Number of I/O Queues: 1024 00:21:19.592 NVMe Specification Version (VS): 1.3 00:21:19.592 NVMe Specification Version (Identify): 1.3 00:21:19.592 Maximum Queue Entries: 128 00:21:19.592 Contiguous Queues Required: Yes 00:21:19.592 Arbitration Mechanisms Supported 00:21:19.592 Weighted Round Robin: Not Supported 00:21:19.592 Vendor Specific: Not Supported 00:21:19.592 Reset Timeout: 15000 ms 00:21:19.592 Doorbell Stride: 4 bytes 00:21:19.592 NVM Subsystem Reset: Not Supported 00:21:19.592 Command Sets Supported 00:21:19.592 NVM Command Set: Supported 00:21:19.592 Boot Partition: Not Supported 00:21:19.592 Memory Page Size Minimum: 4096 bytes 00:21:19.592 Memory Page Size Maximum: 4096 bytes 00:21:19.592 Persistent Memory Region: Not Supported 00:21:19.592 Optional Asynchronous Events Supported 00:21:19.592 Namespace Attribute Notices: Not Supported 00:21:19.592 Firmware Activation Notices: Not Supported 00:21:19.592 ANA Change Notices: Not Supported 00:21:19.592 PLE Aggregate Log Change Notices: Not Supported 00:21:19.592 LBA Status Info Alert Notices: Not Supported 00:21:19.592 EGE Aggregate Log Change Notices: Not Supported 00:21:19.592 Normal NVM Subsystem Shutdown event: Not Supported 00:21:19.592 Zone Descriptor Change Notices: Not Supported 00:21:19.592 Discovery Log Change Notices: Supported 00:21:19.592 Controller Attributes 00:21:19.592 128-bit Host Identifier: Not Supported 00:21:19.593 Non-Operational Permissive Mode: Not Supported 00:21:19.593 NVM Sets: Not Supported 00:21:19.593 Read Recovery Levels: Not Supported 00:21:19.593 Endurance Groups: Not Supported 00:21:19.593 Predictable Latency Mode: Not Supported 00:21:19.593 Traffic Based Keep ALive: Not Supported 00:21:19.593 Namespace Granularity: Not Supported 00:21:19.593 SQ Associations: Not Supported 00:21:19.593 UUID List: Not Supported 00:21:19.593 Multi-Domain Subsystem: Not Supported 00:21:19.593 Fixed Capacity Management: Not Supported 00:21:19.593 Variable Capacity Management: Not Supported 00:21:19.593 Delete Endurance Group: Not Supported 00:21:19.593 Delete NVM Set: Not Supported 00:21:19.593 Extended LBA Formats Supported: Not Supported 00:21:19.593 Flexible Data Placement Supported: Not Supported 00:21:19.593 00:21:19.593 Controller Memory Buffer Support 00:21:19.593 ================================ 00:21:19.593 Supported: No 00:21:19.593 00:21:19.593 Persistent Memory Region Support 00:21:19.593 ================================ 00:21:19.593 Supported: No 00:21:19.593 00:21:19.593 Admin Command Set Attributes 00:21:19.593 ============================ 00:21:19.593 Security Send/Receive: Not Supported 00:21:19.593 Format NVM: Not Supported 00:21:19.593 Firmware Activate/Download: Not Supported 00:21:19.593 Namespace Management: Not Supported 00:21:19.593 Device Self-Test: Not Supported 00:21:19.593 Directives: Not Supported 00:21:19.593 NVMe-MI: Not Supported 00:21:19.593 Virtualization Management: Not Supported 00:21:19.593 Doorbell Buffer Config: Not Supported 00:21:19.593 Get LBA Status Capability: Not Supported 00:21:19.593 Command & Feature Lockdown Capability: Not Supported 00:21:19.593 Abort Command Limit: 1 00:21:19.593 Async 
Event Request Limit: 4 00:21:19.593 Number of Firmware Slots: N/A 00:21:19.593 Firmware Slot 1 Read-Only: N/A 00:21:19.593 Firmware Activation Without Reset: N/A 00:21:19.593 Multiple Update Detection Support: N/A 00:21:19.593 Firmware Update Granularity: No Information Provided 00:21:19.593 Per-Namespace SMART Log: No 00:21:19.593 Asymmetric Namespace Access Log Page: Not Supported 00:21:19.593 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:19.593 Command Effects Log Page: Not Supported 00:21:19.593 Get Log Page Extended Data: Supported 00:21:19.593 Telemetry Log Pages: Not Supported 00:21:19.593 Persistent Event Log Pages: Not Supported 00:21:19.593 Supported Log Pages Log Page: May Support 00:21:19.593 Commands Supported & Effects Log Page: Not Supported 00:21:19.593 Feature Identifiers & Effects Log Page:May Support 00:21:19.593 NVMe-MI Commands & Effects Log Page: May Support 00:21:19.593 Data Area 4 for Telemetry Log: Not Supported 00:21:19.593 Error Log Page Entries Supported: 128 00:21:19.593 Keep Alive: Not Supported 00:21:19.593 00:21:19.593 NVM Command Set Attributes 00:21:19.593 ========================== 00:21:19.593 Submission Queue Entry Size 00:21:19.593 Max: 1 00:21:19.593 Min: 1 00:21:19.593 Completion Queue Entry Size 00:21:19.593 Max: 1 00:21:19.593 Min: 1 00:21:19.593 Number of Namespaces: 0 00:21:19.593 Compare Command: Not Supported 00:21:19.593 Write Uncorrectable Command: Not Supported 00:21:19.593 Dataset Management Command: Not Supported 00:21:19.593 Write Zeroes Command: Not Supported 00:21:19.593 Set Features Save Field: Not Supported 00:21:19.593 Reservations: Not Supported 00:21:19.593 Timestamp: Not Supported 00:21:19.593 Copy: Not Supported 00:21:19.593 Volatile Write Cache: Not Present 00:21:19.593 Atomic Write Unit (Normal): 1 00:21:19.593 Atomic Write Unit (PFail): 1 00:21:19.593 Atomic Compare & Write Unit: 1 00:21:19.593 Fused Compare & Write: Supported 00:21:19.593 Scatter-Gather List 00:21:19.593 SGL Command Set: Supported 00:21:19.593 SGL Keyed: Supported 00:21:19.593 SGL Bit Bucket Descriptor: Not Supported 00:21:19.593 SGL Metadata Pointer: Not Supported 00:21:19.593 Oversized SGL: Not Supported 00:21:19.593 SGL Metadata Address: Not Supported 00:21:19.593 SGL Offset: Supported 00:21:19.593 Transport SGL Data Block: Not Supported 00:21:19.593 Replay Protected Memory Block: Not Supported 00:21:19.593 00:21:19.593 Firmware Slot Information 00:21:19.593 ========================= 00:21:19.593 Active slot: 0 00:21:19.593 00:21:19.593 00:21:19.593 Error Log 00:21:19.593 ========= 00:21:19.593 00:21:19.593 Active Namespaces 00:21:19.593 ================= 00:21:19.593 Discovery Log Page 00:21:19.593 ================== 00:21:19.593 Generation Counter: 2 00:21:19.593 Number of Records: 2 00:21:19.593 Record Format: 0 00:21:19.593 00:21:19.593 Discovery Log Entry 0 00:21:19.593 ---------------------- 00:21:19.593 Transport Type: 3 (TCP) 00:21:19.593 Address Family: 1 (IPv4) 00:21:19.593 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:19.593 Entry Flags: 00:21:19.593 Duplicate Returned Information: 1 00:21:19.593 Explicit Persistent Connection Support for Discovery: 1 00:21:19.593 Transport Requirements: 00:21:19.593 Secure Channel: Not Required 00:21:19.593 Port ID: 0 (0x0000) 00:21:19.593 Controller ID: 65535 (0xffff) 00:21:19.593 Admin Max SQ Size: 128 00:21:19.593 Transport Service Identifier: 4420 00:21:19.593 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:19.593 Transport Address: 10.0.0.2 00:21:19.593 
Discovery Log Entry 1 00:21:19.593 ---------------------- 00:21:19.593 Transport Type: 3 (TCP) 00:21:19.593 Address Family: 1 (IPv4) 00:21:19.593 Subsystem Type: 2 (NVM Subsystem) 00:21:19.593 Entry Flags: 00:21:19.593 Duplicate Returned Information: 0 00:21:19.593 Explicit Persistent Connection Support for Discovery: 0 00:21:19.593 Transport Requirements: 00:21:19.593 Secure Channel: Not Required 00:21:19.593 Port ID: 0 (0x0000) 00:21:19.593 Controller ID: 65535 (0xffff) 00:21:19.593 Admin Max SQ Size: 128 00:21:19.593 Transport Service Identifier: 4420 00:21:19.593 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:19.593 Transport Address: 10.0.0.2 [2024-07-24 19:50:36.880503] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:19.593 [2024-07-24 19:50:36.880536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf3c0) on tqpair=0x1e4f540 00:21:19.593 [2024-07-24 19:50:36.880551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.593 [2024-07-24 19:50:36.880560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf540) on tqpair=0x1e4f540 00:21:19.593 [2024-07-24 19:50:36.880568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.593 [2024-07-24 19:50:36.880579] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf6c0) on tqpair=0x1e4f540 00:21:19.593 [2024-07-24 19:50:36.880587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.593 [2024-07-24 19:50:36.880595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4f540 00:21:19.593 [2024-07-24 19:50:36.880617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.593 [2024-07-24 19:50:36.880635] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.593 [2024-07-24 19:50:36.880643] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.593 [2024-07-24 19:50:36.880650] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4f540) 00:21:19.593 [2024-07-24 19:50:36.880661] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.593 [2024-07-24 19:50:36.880698] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 3, qid 0 00:21:19.593 [2024-07-24 19:50:36.880820] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.593 [2024-07-24 19:50:36.880838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.593 [2024-07-24 19:50:36.880845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.593 [2024-07-24 19:50:36.880852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4f540 00:21:19.593 [2024-07-24 19:50:36.880864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.593 [2024-07-24 19:50:36.880872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.880878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4f540) 00:21:19.594 [2024-07-24 
19:50:36.880889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.594 [2024-07-24 19:50:36.880918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 3, qid 0 00:21:19.594 [2024-07-24 19:50:36.881033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.594 [2024-07-24 19:50:36.881048] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.594 [2024-07-24 19:50:36.881059] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.881066] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4f540 00:21:19.594 [2024-07-24 19:50:36.881074] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:19.594 [2024-07-24 19:50:36.881082] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:19.594 [2024-07-24 19:50:36.881099] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.881111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.881118] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4f540) 00:21:19.594 [2024-07-24 19:50:36.881128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.594 [2024-07-24 19:50:36.881150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 3, qid 0 00:21:19.594 [2024-07-24 19:50:36.885255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.594 [2024-07-24 19:50:36.885272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.594 [2024-07-24 19:50:36.885279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.885285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4f540 00:21:19.594 [2024-07-24 19:50:36.885304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.885316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.885322] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e4f540) 00:21:19.594 [2024-07-24 19:50:36.885337] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.594 [2024-07-24 19:50:36.885360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1eaf840, cid 3, qid 0 00:21:19.594 [2024-07-24 19:50:36.885480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.594 [2024-07-24 19:50:36.885496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.594 [2024-07-24 19:50:36.885505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.885513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1eaf840) on tqpair=0x1e4f540 00:21:19.594 [2024-07-24 19:50:36.885527] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:21:19.594 00:21:19.594 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:19.594 [2024-07-24 19:50:36.917696] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:21:19.594 [2024-07-24 19:50:36.917740] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233178 ] 00:21:19.594 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.594 [2024-07-24 19:50:36.950921] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:19.594 [2024-07-24 19:50:36.950969] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:19.594 [2024-07-24 19:50:36.950979] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:19.594 [2024-07-24 19:50:36.950992] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:19.594 [2024-07-24 19:50:36.951004] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:19.594 [2024-07-24 19:50:36.951269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:19.594 [2024-07-24 19:50:36.951311] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13a3540 0 00:21:19.594 [2024-07-24 19:50:36.956293] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:19.594 [2024-07-24 19:50:36.956316] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:19.594 [2024-07-24 19:50:36.956324] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:19.594 [2024-07-24 19:50:36.956330] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:19.594 [2024-07-24 19:50:36.956368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.956380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.956386] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.594 [2024-07-24 19:50:36.956400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:19.594 [2024-07-24 19:50:36.956426] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.594 [2024-07-24 19:50:36.965256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.594 [2024-07-24 19:50:36.965273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.594 [2024-07-24 19:50:36.965281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.594 [2024-07-24 19:50:36.965313] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:19.594 [2024-07-24 19:50:36.965325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:19.594 [2024-07-24 19:50:36.965335] 
nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:19.594 [2024-07-24 19:50:36.965353] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965361] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965368] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.594 [2024-07-24 19:50:36.965379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.594 [2024-07-24 19:50:36.965403] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.594 [2024-07-24 19:50:36.965556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.594 [2024-07-24 19:50:36.965571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.594 [2024-07-24 19:50:36.965578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.594 [2024-07-24 19:50:36.965597] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:19.594 [2024-07-24 19:50:36.965611] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:19.594 [2024-07-24 19:50:36.965623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965637] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.594 [2024-07-24 19:50:36.965648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.594 [2024-07-24 19:50:36.965670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.594 [2024-07-24 19:50:36.965769] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.594 [2024-07-24 19:50:36.965781] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.594 [2024-07-24 19:50:36.965788] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.594 [2024-07-24 19:50:36.965803] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:19.594 [2024-07-24 19:50:36.965816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:19.594 [2024-07-24 19:50:36.965829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965836] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.594 [2024-07-24 19:50:36.965842] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.594 [2024-07-24 19:50:36.965853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
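
[Editor's note] The spdk_nvme_identify invocation above drives exactly this init state machine (connect adminq -> read vs -> read cap -> ... -> ready) through SPDK's public host API. A minimal sketch of the equivalent program, using the same -r transport string the test passes; error handling and env option tuning are trimmed for brevity:

    #include <stdio.h>
    #include <string.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }

        /* Same -r string the test passes to spdk_nvme_identify. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Runs the whole state machine traced in the log:
         * connect adminq, read vs, read cap, enable, identify, ... */
        struct spdk_nvme_ctrlr *ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* The fields the identify dump prints come from here. */
        const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);

        printf("NVMe Specification Version (VS): %u.%u\n", vs.bits.mjr, vs.bits.mnr);
        printf("Model Number: %.40s\n", cdata->mn);

        spdk_nvme_detach(ctrlr);
        return 0;
    }
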
00:21:19.595 [2024-07-24 19:50:36.965873] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.595 [2024-07-24 19:50:36.965977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.595 [2024-07-24 19:50:36.965989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.595 [2024-07-24 19:50:36.965996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.595 [2024-07-24 19:50:36.966002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.595 [2024-07-24 19:50:36.966011] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:19.595 [2024-07-24 19:50:36.966031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.595 [2024-07-24 19:50:36.966041] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.595 [2024-07-24 19:50:36.966048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.595 [2024-07-24 19:50:36.966058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.595 [2024-07-24 19:50:36.966079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.595 [2024-07-24 19:50:36.966172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.595 [2024-07-24 19:50:36.966184] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.595 [2024-07-24 19:50:36.966191] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.595 [2024-07-24 19:50:36.966198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.595 [2024-07-24 19:50:36.966205] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:19.595 [2024-07-24 19:50:36.966213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:19.595 [2024-07-24 19:50:36.966226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:19.595 [2024-07-24 19:50:36.966336] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:19.595 [2024-07-24 19:50:36.966346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:19.595 [2024-07-24 19:50:36.966358] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.595 [2024-07-24 19:50:36.966366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.595 [2024-07-24 19:50:36.966372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.595 [2024-07-24 19:50:36.966382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.595 [2024-07-24 19:50:36.966404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.855 [2024-07-24 19:50:36.966549] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.855 
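
[Editor's note] The "check en" / "disable and wait for CSTS.RDY = 0" / "Setting CC.EN = 1" records around this point are SPDK's generic controller-enable handshake from the NVMe base spec. A condensed sketch of that logic using the register layouts from spdk/nvme_spec.h; prop_get32()/prop_set32() are hypothetical stand-ins for the FABRIC PROPERTY GET/SET round-trips printed above, and the 15000 ms timeouts are elided:

    #include <stddef.h>
    #include <stdint.h>
    #include "spdk/nvme_spec.h"

    /* Hypothetical transport hooks: over NVMe-oF each call becomes one of
     * the FABRIC PROPERTY GET/SET capsules traced in this log. */
    extern uint32_t prop_get32(uint32_t ofs);
    extern void prop_set32(uint32_t ofs, uint32_t val);

    static void enable_controller(void)
    {
        union spdk_nvme_cc_register cc;
        union spdk_nvme_csts_register csts;

        /* "check en wait for cc" */
        cc.raw = prop_get32(offsetof(struct spdk_nvme_registers, cc.raw));
        if (cc.bits.en) {
            /* "disable and wait for CSTS.RDY = 0" */
            cc.bits.en = 0;
            prop_set32(offsetof(struct spdk_nvme_registers, cc.raw), cc.raw);
            do {
                csts.raw = prop_get32(offsetof(struct spdk_nvme_registers, csts.raw));
            } while (csts.bits.rdy != 0);
        }

        /* "CC.EN = 0 && CSTS.RDY = 0" -> "Setting CC.EN = 1" */
        cc.bits.en = 1;
        prop_set32(offsetof(struct spdk_nvme_registers, cc.raw), cc.raw);

        /* "wait for CSTS.RDY = 1" -- timeout handling elided in this sketch */
        do {
            csts.raw = prop_get32(offsetof(struct spdk_nvme_registers, csts.raw));
        } while (csts.bits.rdy != 1);
    }
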
[2024-07-24 19:50:36.966562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.855 [2024-07-24 19:50:36.966568] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.855 [2024-07-24 19:50:36.966575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.855 [2024-07-24 19:50:36.966583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:19.855 [2024-07-24 19:50:36.966599] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.855 [2024-07-24 19:50:36.966608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.855 [2024-07-24 19:50:36.966615] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.855 [2024-07-24 19:50:36.966625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.855 [2024-07-24 19:50:36.966646] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.855 [2024-07-24 19:50:36.966772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.855 [2024-07-24 19:50:36.966787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.855 [2024-07-24 19:50:36.966793] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.855 [2024-07-24 19:50:36.966800] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.856 [2024-07-24 19:50:36.966811] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:19.856 [2024-07-24 19:50:36.966819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.966833] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:19.856 [2024-07-24 19:50:36.966847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.966861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.966869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.966880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-07-24 19:50:36.966901] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.856 [2024-07-24 19:50:36.967053] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.856 [2024-07-24 19:50:36.967066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.856 [2024-07-24 19:50:36.967073] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967079] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=4096, cccid=0 00:21:19.856 [2024-07-24 19:50:36.967086] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14033c0) on 
tqpair(0x13a3540): expected_datao=0, payload_size=4096 00:21:19.856 [2024-07-24 19:50:36.967094] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967104] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967112] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.856 [2024-07-24 19:50:36.967133] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.856 [2024-07-24 19:50:36.967139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.856 [2024-07-24 19:50:36.967156] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:19.856 [2024-07-24 19:50:36.967165] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:19.856 [2024-07-24 19:50:36.967172] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:19.856 [2024-07-24 19:50:36.967179] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:19.856 [2024-07-24 19:50:36.967186] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:19.856 [2024-07-24 19:50:36.967194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967208] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967225] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967233] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967240] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.856 [2024-07-24 19:50:36.967282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.856 [2024-07-24 19:50:36.967380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.856 [2024-07-24 19:50:36.967393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.856 [2024-07-24 19:50:36.967400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967406] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.856 [2024-07-24 19:50:36.967416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.856 [2024-07-24 19:50:36.967449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967461] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.856 [2024-07-24 19:50:36.967479] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967486] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967492] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.856 [2024-07-24 19:50:36.967510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967517] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.856 [2024-07-24 19:50:36.967540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967571] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967578] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967589] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-07-24 19:50:36.967611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14033c0, cid 0, qid 0 00:21:19.856 [2024-07-24 19:50:36.967622] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403540, cid 1, qid 0 00:21:19.856 [2024-07-24 19:50:36.967630] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14036c0, cid 2, qid 0 00:21:19.856 [2024-07-24 19:50:36.967637] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.856 [2024-07-24 19:50:36.967645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.856 [2024-07-24 19:50:36.967786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.856 [2024-07-24 19:50:36.967798] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.856 [2024-07-24 19:50:36.967805] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967811] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 
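
[Editor's note] The "SET FEATURES ASYNC EVENT CONFIGURATION" command followed by four "ASYNC EVENT REQUEST (0c)" submissions (cid 0-3) is SPDK arming its AER slots after configuring which events the controller may report. An application consumes those events through the public callback hook; a minimal sketch, where the callback name and the polling loop are illustrative, not taken from the test:

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* cdw0 of the completion carries the async event type/info. */
        printf("AER completed: cdw0=0x%x\n", cpl->cdw0);
    }

    void
    arm_aers(struct spdk_nvme_ctrlr *ctrlr)
    {
        /* SPDK submits and re-submits the AER commands internally
         * (the four requests above); we only register the consumer. */
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

        /* AER completions, like the keep-alive traffic set up next
         * ("Sending keep alive every 5000000 us"), surface when the
         * admin queue is polled. Loop shown unbounded for illustration. */
        for (;;) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
    }
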
00:21:19.856 [2024-07-24 19:50:36.967819] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:19.856 [2024-07-24 19:50:36.967831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.967872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.967885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.967896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:19.856 [2024-07-24 19:50:36.967916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.856 [2024-07-24 19:50:36.968061] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.856 [2024-07-24 19:50:36.968073] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.856 [2024-07-24 19:50:36.968080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.968086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 00:21:19.856 [2024-07-24 19:50:36.968153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.968174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:19.856 [2024-07-24 19:50:36.968188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.856 [2024-07-24 19:50:36.968195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.856 [2024-07-24 19:50:36.968206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.856 [2024-07-24 19:50:36.968250] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.856 [2024-07-24 19:50:36.968375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.856 [2024-07-24 19:50:36.968391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.856 [2024-07-24 19:50:36.968398] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968404] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=4096, cccid=4 00:21:19.857 [2024-07-24 19:50:36.968411] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14039c0) on tqpair(0x13a3540): expected_datao=0, payload_size=4096 00:21:19.857 [2024-07-24 19:50:36.968419] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:19.857 [2024-07-24 19:50:36.968449] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968459] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.968567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.968573] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968580] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.968595] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:19.857 [2024-07-24 19:50:36.968612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.968632] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.968647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968654] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.968665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-07-24 19:50:36.968686] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.857 [2024-07-24 19:50:36.968802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.857 [2024-07-24 19:50:36.968817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.857 [2024-07-24 19:50:36.968823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968830] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=4096, cccid=4 00:21:19.857 [2024-07-24 19:50:36.968837] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14039c0) on tqpair(0x13a3540): expected_datao=0, payload_size=4096 00:21:19.857 [2024-07-24 19:50:36.968844] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968861] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968870] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.968907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.968913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.968942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.968961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.968975] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.968983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.968993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-07-24 19:50:36.969015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.857 [2024-07-24 19:50:36.969123] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.857 [2024-07-24 19:50:36.969138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.857 [2024-07-24 19:50:36.969144] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.969150] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=4096, cccid=4 00:21:19.857 [2024-07-24 19:50:36.969158] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14039c0) on tqpair(0x13a3540): expected_datao=0, payload_size=4096 00:21:19.857 [2024-07-24 19:50:36.969165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.969182] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.969191] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.969222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.969233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.969239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973259] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.973277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973337] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973355] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973363] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:19.857 [2024-07-24 19:50:36.973370] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:19.857 [2024-07-24 19:50:36.973379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:19.857 [2024-07-24 19:50:36.973397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.973417] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-07-24 19:50:36.973428] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973435] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.973450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:19.857 [2024-07-24 19:50:36.973476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.857 [2024-07-24 19:50:36.973488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b40, cid 5, qid 0 00:21:19.857 [2024-07-24 19:50:36.973609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.973624] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.973631] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973638] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.973648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.973657] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.973663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403b40) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.973685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973695] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.973705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-07-24 19:50:36.973727] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b40, cid 5, qid 0 00:21:19.857 [2024-07-24 19:50:36.973832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.973844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.973854] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973862] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403b40) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.973877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.973886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.973896] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-07-24 19:50:36.973917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b40, cid 5, qid 0 00:21:19.857 [2024-07-24 19:50:36.974062] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.974075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.974081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.974088] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403b40) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.974103] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.974112] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a3540) 00:21:19.857 [2024-07-24 19:50:36.974122] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.857 [2024-07-24 19:50:36.974142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b40, cid 5, qid 0 00:21:19.857 [2024-07-24 19:50:36.974273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.857 [2024-07-24 19:50:36.974289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.857 [2024-07-24 19:50:36.974296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.974302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403b40) on tqpair=0x13a3540 00:21:19.857 [2024-07-24 19:50:36.974326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.857 [2024-07-24 19:50:36.974337] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13a3540) 00:21:19.858 [2024-07-24 19:50:36.974348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-07-24 19:50:36.974360] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13a3540) 00:21:19.858 [2024-07-24 19:50:36.974377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-07-24 19:50:36.974388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974395] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13a3540) 00:21:19.858 [2024-07-24 19:50:36.974404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-07-24 19:50:36.974416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974424] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13a3540) 00:21:19.858 [2024-07-24 19:50:36.974433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.858 [2024-07-24 19:50:36.974455] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403b40, cid 5, qid 0 00:21:19.858 [2024-07-24 19:50:36.974466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14039c0, cid 4, qid 0 00:21:19.858 [2024-07-24 19:50:36.974474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403cc0, cid 6, qid 0 00:21:19.858 [2024-07-24 19:50:36.974485] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e40, cid 7, qid 0 00:21:19.858 [2024-07-24 19:50:36.974692] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.858 [2024-07-24 19:50:36.974707] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.858 [2024-07-24 19:50:36.974714] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974721] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=8192, cccid=5 00:21:19.858 [2024-07-24 19:50:36.974728] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403b40) on tqpair(0x13a3540): expected_datao=0, payload_size=8192 00:21:19.858 [2024-07-24 19:50:36.974736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974746] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974754] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.858 [2024-07-24 19:50:36.974771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.858 [2024-07-24 19:50:36.974778] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974784] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=512, cccid=4 00:21:19.858 [2024-07-24 19:50:36.974792] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14039c0) on tqpair(0x13a3540): expected_datao=0, payload_size=512 00:21:19.858 [2024-07-24 19:50:36.974799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974808] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974815] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.858 [2024-07-24 19:50:36.974832] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.858 [2024-07-24 19:50:36.974838] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974844] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=512, cccid=6 00:21:19.858 [2024-07-24 19:50:36.974852] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403cc0) on tqpair(0x13a3540): expected_datao=0, payload_size=512 00:21:19.858 [2024-07-24 19:50:36.974859] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974868] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974875] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974884] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:19.858 [2024-07-24 19:50:36.974892] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:19.858 [2024-07-24 19:50:36.974899] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974905] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13a3540): datao=0, datal=4096, cccid=7 00:21:19.858 [2024-07-24 19:50:36.974912] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1403e40) on tqpair(0x13a3540): expected_datao=0, payload_size=4096 00:21:19.858 [2024-07-24 19:50:36.974920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974929] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974937] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.858 [2024-07-24 19:50:36.974958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.858 [2024-07-24 19:50:36.974964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.974986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403b40) on tqpair=0x13a3540 00:21:19.858 [2024-07-24 19:50:36.975004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.858 [2024-07-24 19:50:36.975018] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.858 [2024-07-24 19:50:36.975025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.975031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14039c0) on tqpair=0x13a3540 00:21:19.858 [2024-07-24 19:50:36.975046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.858 [2024-07-24 19:50:36.975056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.858 [2024-07-24 19:50:36.975062] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.975069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403cc0) on tqpair=0x13a3540 00:21:19.858 [2024-07-24 19:50:36.975079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.858 [2024-07-24 19:50:36.975088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.858 [2024-07-24 19:50:36.975094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.858 [2024-07-24 19:50:36.975100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403e40) on tqpair=0x13a3540 00:21:19.858 ===================================================== 00:21:19.858 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.858 ===================================================== 00:21:19.858 Controller Capabilities/Features 00:21:19.858 ================================ 00:21:19.858 Vendor ID: 8086 00:21:19.858 Subsystem Vendor ID: 8086 00:21:19.858 Serial Number: SPDK00000000000001 00:21:19.858 Model Number: SPDK bdev Controller 00:21:19.858 Firmware Version: 24.09 00:21:19.858 Recommended Arb Burst: 6 00:21:19.858 IEEE OUI Identifier: e4 d2 5c 00:21:19.858 Multi-path I/O 00:21:19.858 May have multiple subsystem ports: Yes 00:21:19.858 May have multiple controllers: Yes 00:21:19.858 Associated with SR-IOV VF: No 00:21:19.858 Max Data 
Transfer Size: 131072 00:21:19.858 Max Number of Namespaces: 32 00:21:19.858 Max Number of I/O Queues: 127 00:21:19.858 NVMe Specification Version (VS): 1.3 00:21:19.858 NVMe Specification Version (Identify): 1.3 00:21:19.858 Maximum Queue Entries: 128 00:21:19.858 Contiguous Queues Required: Yes 00:21:19.858 Arbitration Mechanisms Supported 00:21:19.858 Weighted Round Robin: Not Supported 00:21:19.858 Vendor Specific: Not Supported 00:21:19.858 Reset Timeout: 15000 ms 00:21:19.858 Doorbell Stride: 4 bytes 00:21:19.858 NVM Subsystem Reset: Not Supported 00:21:19.858 Command Sets Supported 00:21:19.858 NVM Command Set: Supported 00:21:19.858 Boot Partition: Not Supported 00:21:19.858 Memory Page Size Minimum: 4096 bytes 00:21:19.858 Memory Page Size Maximum: 4096 bytes 00:21:19.858 Persistent Memory Region: Not Supported 00:21:19.858 Optional Asynchronous Events Supported 00:21:19.858 Namespace Attribute Notices: Supported 00:21:19.858 Firmware Activation Notices: Not Supported 00:21:19.858 ANA Change Notices: Not Supported 00:21:19.858 PLE Aggregate Log Change Notices: Not Supported 00:21:19.858 LBA Status Info Alert Notices: Not Supported 00:21:19.858 EGE Aggregate Log Change Notices: Not Supported 00:21:19.858 Normal NVM Subsystem Shutdown event: Not Supported 00:21:19.858 Zone Descriptor Change Notices: Not Supported 00:21:19.858 Discovery Log Change Notices: Not Supported 00:21:19.858 Controller Attributes 00:21:19.858 128-bit Host Identifier: Supported 00:21:19.858 Non-Operational Permissive Mode: Not Supported 00:21:19.858 NVM Sets: Not Supported 00:21:19.858 Read Recovery Levels: Not Supported 00:21:19.858 Endurance Groups: Not Supported 00:21:19.858 Predictable Latency Mode: Not Supported 00:21:19.858 Traffic Based Keep ALive: Not Supported 00:21:19.858 Namespace Granularity: Not Supported 00:21:19.858 SQ Associations: Not Supported 00:21:19.858 UUID List: Not Supported 00:21:19.858 Multi-Domain Subsystem: Not Supported 00:21:19.858 Fixed Capacity Management: Not Supported 00:21:19.858 Variable Capacity Management: Not Supported 00:21:19.858 Delete Endurance Group: Not Supported 00:21:19.858 Delete NVM Set: Not Supported 00:21:19.858 Extended LBA Formats Supported: Not Supported 00:21:19.858 Flexible Data Placement Supported: Not Supported 00:21:19.858 00:21:19.859 Controller Memory Buffer Support 00:21:19.859 ================================ 00:21:19.859 Supported: No 00:21:19.859 00:21:19.859 Persistent Memory Region Support 00:21:19.859 ================================ 00:21:19.859 Supported: No 00:21:19.859 00:21:19.859 Admin Command Set Attributes 00:21:19.859 ============================ 00:21:19.859 Security Send/Receive: Not Supported 00:21:19.859 Format NVM: Not Supported 00:21:19.859 Firmware Activate/Download: Not Supported 00:21:19.859 Namespace Management: Not Supported 00:21:19.859 Device Self-Test: Not Supported 00:21:19.859 Directives: Not Supported 00:21:19.859 NVMe-MI: Not Supported 00:21:19.859 Virtualization Management: Not Supported 00:21:19.859 Doorbell Buffer Config: Not Supported 00:21:19.859 Get LBA Status Capability: Not Supported 00:21:19.859 Command & Feature Lockdown Capability: Not Supported 00:21:19.859 Abort Command Limit: 4 00:21:19.859 Async Event Request Limit: 4 00:21:19.859 Number of Firmware Slots: N/A 00:21:19.859 Firmware Slot 1 Read-Only: N/A 00:21:19.859 Firmware Activation Without Reset: N/A 00:21:19.859 Multiple Update Detection Support: N/A 00:21:19.859 Firmware Update Granularity: No Information Provided 00:21:19.859 Per-Namespace SMART 
Log: No 00:21:19.859 Asymmetric Namespace Access Log Page: Not Supported 00:21:19.859 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:19.859 Command Effects Log Page: Supported 00:21:19.859 Get Log Page Extended Data: Supported 00:21:19.859 Telemetry Log Pages: Not Supported 00:21:19.859 Persistent Event Log Pages: Not Supported 00:21:19.859 Supported Log Pages Log Page: May Support 00:21:19.859 Commands Supported & Effects Log Page: Not Supported 00:21:19.859 Feature Identifiers & Effects Log Page:May Support 00:21:19.859 NVMe-MI Commands & Effects Log Page: May Support 00:21:19.859 Data Area 4 for Telemetry Log: Not Supported 00:21:19.859 Error Log Page Entries Supported: 128 00:21:19.859 Keep Alive: Supported 00:21:19.859 Keep Alive Granularity: 10000 ms 00:21:19.859 00:21:19.859 NVM Command Set Attributes 00:21:19.859 ========================== 00:21:19.859 Submission Queue Entry Size 00:21:19.859 Max: 64 00:21:19.859 Min: 64 00:21:19.859 Completion Queue Entry Size 00:21:19.859 Max: 16 00:21:19.859 Min: 16 00:21:19.859 Number of Namespaces: 32 00:21:19.859 Compare Command: Supported 00:21:19.859 Write Uncorrectable Command: Not Supported 00:21:19.859 Dataset Management Command: Supported 00:21:19.859 Write Zeroes Command: Supported 00:21:19.859 Set Features Save Field: Not Supported 00:21:19.859 Reservations: Supported 00:21:19.859 Timestamp: Not Supported 00:21:19.859 Copy: Supported 00:21:19.859 Volatile Write Cache: Present 00:21:19.859 Atomic Write Unit (Normal): 1 00:21:19.859 Atomic Write Unit (PFail): 1 00:21:19.859 Atomic Compare & Write Unit: 1 00:21:19.859 Fused Compare & Write: Supported 00:21:19.859 Scatter-Gather List 00:21:19.859 SGL Command Set: Supported 00:21:19.859 SGL Keyed: Supported 00:21:19.859 SGL Bit Bucket Descriptor: Not Supported 00:21:19.859 SGL Metadata Pointer: Not Supported 00:21:19.859 Oversized SGL: Not Supported 00:21:19.859 SGL Metadata Address: Not Supported 00:21:19.859 SGL Offset: Supported 00:21:19.859 Transport SGL Data Block: Not Supported 00:21:19.859 Replay Protected Memory Block: Not Supported 00:21:19.859 00:21:19.859 Firmware Slot Information 00:21:19.859 ========================= 00:21:19.859 Active slot: 1 00:21:19.859 Slot 1 Firmware Revision: 24.09 00:21:19.859 00:21:19.859 00:21:19.859 Commands Supported and Effects 00:21:19.859 ============================== 00:21:19.859 Admin Commands 00:21:19.859 -------------- 00:21:19.859 Get Log Page (02h): Supported 00:21:19.859 Identify (06h): Supported 00:21:19.859 Abort (08h): Supported 00:21:19.859 Set Features (09h): Supported 00:21:19.859 Get Features (0Ah): Supported 00:21:19.859 Asynchronous Event Request (0Ch): Supported 00:21:19.859 Keep Alive (18h): Supported 00:21:19.859 I/O Commands 00:21:19.859 ------------ 00:21:19.859 Flush (00h): Supported LBA-Change 00:21:19.859 Write (01h): Supported LBA-Change 00:21:19.859 Read (02h): Supported 00:21:19.859 Compare (05h): Supported 00:21:19.859 Write Zeroes (08h): Supported LBA-Change 00:21:19.859 Dataset Management (09h): Supported LBA-Change 00:21:19.859 Copy (19h): Supported LBA-Change 00:21:19.859 00:21:19.859 Error Log 00:21:19.859 ========= 00:21:19.859 00:21:19.859 Arbitration 00:21:19.859 =========== 00:21:19.859 Arbitration Burst: 1 00:21:19.859 00:21:19.859 Power Management 00:21:19.859 ================ 00:21:19.859 Number of Power States: 1 00:21:19.859 Current Power State: Power State #0 00:21:19.859 Power State #0: 00:21:19.859 Max Power: 0.00 W 00:21:19.859 Non-Operational State: Operational 00:21:19.859 Entry Latency: Not 
Reported 00:21:19.859 Exit Latency: Not Reported 00:21:19.859 Relative Read Throughput: 0 00:21:19.859 Relative Read Latency: 0 00:21:19.859 Relative Write Throughput: 0 00:21:19.859 Relative Write Latency: 0 00:21:19.859 Idle Power: Not Reported 00:21:19.859 Active Power: Not Reported 00:21:19.859 Non-Operational Permissive Mode: Not Supported 00:21:19.859 00:21:19.859 Health Information 00:21:19.859 ================== 00:21:19.859 Critical Warnings: 00:21:19.859 Available Spare Space: OK 00:21:19.859 Temperature: OK 00:21:19.859 Device Reliability: OK 00:21:19.859 Read Only: No 00:21:19.859 Volatile Memory Backup: OK 00:21:19.859 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:19.859 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:19.859 Available Spare: 0% 00:21:19.859 Available Spare Threshold: 0% 00:21:19.859 Life Percentage Used:[2024-07-24 19:50:36.975211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13a3540) 00:21:19.859 [2024-07-24 19:50:36.975261] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.859 [2024-07-24 19:50:36.975285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403e40, cid 7, qid 0 00:21:19.859 [2024-07-24 19:50:36.975437] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.859 [2024-07-24 19:50:36.975452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.859 [2024-07-24 19:50:36.975459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403e40) on tqpair=0x13a3540 00:21:19.859 [2024-07-24 19:50:36.975510] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:19.859 [2024-07-24 19:50:36.975539] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14033c0) on tqpair=0x13a3540 00:21:19.859 [2024-07-24 19:50:36.975550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.859 [2024-07-24 19:50:36.975559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403540) on tqpair=0x13a3540 00:21:19.859 [2024-07-24 19:50:36.975566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.859 [2024-07-24 19:50:36.975574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14036c0) on tqpair=0x13a3540 00:21:19.859 [2024-07-24 19:50:36.975581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.859 [2024-07-24 19:50:36.975589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.859 [2024-07-24 19:50:36.975596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:19.859 [2024-07-24 19:50:36.975609] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975624] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.859 [2024-07-24 19:50:36.975634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.859 [2024-07-24 19:50:36.975656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.859 [2024-07-24 19:50:36.975750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.859 [2024-07-24 19:50:36.975763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.859 [2024-07-24 19:50:36.975773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.859 [2024-07-24 19:50:36.975791] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975798] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.859 [2024-07-24 19:50:36.975805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.859 [2024-07-24 19:50:36.975815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.859 [2024-07-24 19:50:36.975841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.859 [2024-07-24 19:50:36.975967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.975982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.975989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.975996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.976003] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:19.860 [2024-07-24 19:50:36.976011] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:19.860 [2024-07-24 19:50:36.976027] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976042] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 [2024-07-24 19:50:36.976053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.976073] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.976170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.976185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.976192] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.976215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976224] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976230] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 [2024-07-24 19:50:36.976250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.976273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.976368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.976381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.976387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.976410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 [2024-07-24 19:50:36.976436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.976456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.976554] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.976569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.976576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.976598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976608] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 [2024-07-24 19:50:36.976625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.976645] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.976805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.976817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.976823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976830] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.976846] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.976861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 
[2024-07-24 19:50:36.976871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.976891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.977034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.977046] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.977053] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.977060] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.977075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.977084] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.977091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 [2024-07-24 19:50:36.977101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.977121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.981258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.981274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.981281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.981288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.981304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.981330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.981336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13a3540) 00:21:19.860 [2024-07-24 19:50:36.981347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:19.860 [2024-07-24 19:50:36.981370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1403840, cid 3, qid 0 00:21:19.860 [2024-07-24 19:50:36.981478] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:19.860 [2024-07-24 19:50:36.981497] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:19.860 [2024-07-24 19:50:36.981505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:19.860 [2024-07-24 19:50:36.981511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1403840) on tqpair=0x13a3540 00:21:19.860 [2024-07-24 19:50:36.981524] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:21:19.860 0% 00:21:19.860 Data Units Read: 0 00:21:19.860 Data Units Written: 0 00:21:19.860 Host Read Commands: 0 00:21:19.860 Host Write Commands: 0 00:21:19.860 Controller Busy Time: 0 minutes 00:21:19.860 Power Cycles: 0 00:21:19.860 Power On Hours: 0 hours 00:21:19.860 Unsafe Shutdowns: 0 00:21:19.860 Unrecoverable Media Errors: 0 00:21:19.860 Lifetime Error Log Entries: 0 00:21:19.860 Warning 
Temperature Time: 0 minutes 00:21:19.860 Critical Temperature Time: 0 minutes 00:21:19.861 00:21:19.861 Number of Queues 00:21:19.861 ================ 00:21:19.861 Number of I/O Submission Queues: 127 00:21:19.861 Number of I/O Completion Queues: 127 00:21:19.861 00:21:19.861 Active Namespaces 00:21:19.861 ================= 00:21:19.861 Namespace ID:1 00:21:19.861 Error Recovery Timeout: Unlimited 00:21:19.861 Command Set Identifier: NVM (00h) 00:21:19.861 Deallocate: Supported 00:21:19.861 Deallocated/Unwritten Error: Not Supported 00:21:19.861 Deallocated Read Value: Unknown 00:21:19.861 Deallocate in Write Zeroes: Not Supported 00:21:19.861 Deallocated Guard Field: 0xFFFF 00:21:19.861 Flush: Supported 00:21:19.861 Reservation: Supported 00:21:19.861 Namespace Sharing Capabilities: Multiple Controllers 00:21:19.861 Size (in LBAs): 131072 (0GiB) 00:21:19.861 Capacity (in LBAs): 131072 (0GiB) 00:21:19.861 Utilization (in LBAs): 131072 (0GiB) 00:21:19.861 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:19.861 EUI64: ABCDEF0123456789 00:21:19.861 UUID: 6df58504-d9a7-4e13-8628-278da1f0f2c2 00:21:19.861 Thin Provisioning: Not Supported 00:21:19.861 Per-NS Atomic Units: Yes 00:21:19.861 Atomic Boundary Size (Normal): 0 00:21:19.861 Atomic Boundary Size (PFail): 0 00:21:19.861 Atomic Boundary Offset: 0 00:21:19.861 Maximum Single Source Range Length: 65535 00:21:19.861 Maximum Copy Length: 65535 00:21:19.861 Maximum Source Range Count: 1 00:21:19.861 NGUID/EUI64 Never Reused: No 00:21:19.861 Namespace Write Protected: No 00:21:19.861 Number of LBA Formats: 1 00:21:19.861 Current LBA Format: LBA Format #00 00:21:19.861 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:19.861 00:21:19.861 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:19.861 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.861 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@562 -- # xtrace_disable 00:21:19.861 19:50:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # nvmfcleanup 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.861 rmmod nvme_tcp 00:21:19.861 rmmod nvme_fabrics 00:21:19.861 rmmod nvme_keyring 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # '[' -n 1233075 ']' 00:21:19.861 19:50:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # killprocess 1233075 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@951 -- # '[' -z 1233075 ']' 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # kill -0 1233075 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # uname 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1233075 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1233075' 00:21:19.861 killing process with pid 1233075 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # kill 1233075 00:21:19.861 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@975 -- # wait 1233075 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@282 -- # remove_spdk_ns 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.119 19:50:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.648 19:50:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:21:22.648 00:21:22.648 real 0m5.334s 00:21:22.648 user 0m4.041s 00:21:22.648 sys 0m1.843s 00:21:22.648 19:50:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:22.648 19:50:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:22.648 ************************************ 00:21:22.648 END TEST nvmf_identify 00:21:22.648 ************************************ 00:21:22.648 19:50:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:22.648 19:50:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.649 ************************************ 00:21:22.649 START TEST nvmf_perf 00:21:22.649 ************************************ 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:22.649 * Looking for test storage... 
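Before the perf test proper starts, nvmftestinit below rebuilds the test network. Spread across the xtrace that follows, the wiring condenses to the sequence sketched here (a minimal sketch: cvl_0_0 and cvl_0_1 are the two E810 ports the script detects below, and the target-side port is isolated in its own network namespace so the SPDK target and the initiator can share one host):

  # Move the target-side port into a private namespace, then address both sides.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP (port 4420) and confirm reachability in both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1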
00:21:22.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # 
nvmftestinit 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # xtrace_disable 00:21:22.649 19:50:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # pci_devs=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -a pci_devs 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # pci_net_devs=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # pci_drivers=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -A pci_drivers 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # net_devs=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # local -ga net_devs 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # e810=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # local -ga e810 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # x722=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # local -ga x722 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # mlx=() 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # local -ga mlx 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.053 19:50:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:24.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:24.053 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:24.053 
19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:24.053 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.053 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:24.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # is_hw=yes 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.054 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:21:24.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:21:24.312 00:21:24.312 --- 10.0.0.2 ping statistics --- 00:21:24.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.312 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:21:24.312 00:21:24.312 --- 10.0.0.1 ping statistics --- 00:21:24.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.312 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # return 0 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@725 -- # xtrace_disable 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@485 -- # nvmfpid=1235106 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@486 -- # waitforlisten 1235106 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@832 -- # '[' -z 1235106 ']' 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:24.312 19:50:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:24.312 [2024-07-24 19:50:41.606314] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:21:24.312 [2024-07-24 19:50:41.606391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.312 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.312 [2024-07-24 19:50:41.674742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.570 [2024-07-24 19:50:41.791911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.570 [2024-07-24 19:50:41.791984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.570 [2024-07-24 19:50:41.792010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.570 [2024-07-24 19:50:41.792030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.570 [2024-07-24 19:50:41.792050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.570 [2024-07-24 19:50:41.792132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.570 [2024-07-24 19:50:41.792194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.570 [2024-07-24 19:50:41.792327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.570 [2024-07-24 19:50:41.792319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@865 -- # return 0 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@731 -- # xtrace_disable 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:25.503 19:50:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:28.782 19:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:28.782 19:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:28.782 19:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:28.782 19:50:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:29.039 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:29.039 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:29.039 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:29.039 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:29.039 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:29.039 [2024-07-24 19:50:46.393381] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.039 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:29.297 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:29.297 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:29.555 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:29.555 19:50:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:29.813 19:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.070 [2024-07-24 19:50:47.389142] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.070 19:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:30.326 19:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:30.326 19:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:30.326 19:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:30.326 19:50:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:31.698 Initializing NVMe Controllers 00:21:31.698 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:31.698 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:31.698 Initialization complete. Launching workers. 
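For reference, the target-side bring-up threaded through the xtrace above condenses to the RPC sequence below (a sketch with the repository path shortened to scripts/rpc.py and the default /var/tmp/spdk.sock socket assumed; Nvme0n1 is the bdev created from the local controller at 0000:88:00.0 attached earlier via gen_nvme.sh):

  scripts/rpc.py bdev_malloc_create 64 512        # prints the new bdev name, Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

With the listeners up, the baseline spdk_nvme_perf run above exercises the local PCIe device first; the fabrics runs that follow point -r at trtype:tcp traddr:10.0.0.2 trsvcid:4420 instead.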
00:21:31.698 ======================================================== 00:21:31.698 Latency(us) 00:21:31.698 Device Information : IOPS MiB/s Average min max 00:21:31.698 PCIE (0000:88:00.0) NSID 1 from core 0: 85070.60 332.31 375.69 47.46 7427.94 00:21:31.698 ======================================================== 00:21:31.698 Total : 85070.60 332.31 375.69 47.46 7427.94 00:21:31.698 00:21:31.698 19:50:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:31.698 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.072 Initializing NVMe Controllers 00:21:33.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:33.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:33.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:33.072 Initialization complete. Launching workers. 00:21:33.072 ======================================================== 00:21:33.072 Latency(us) 00:21:33.072 Device Information : IOPS MiB/s Average min max 00:21:33.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 93.00 0.36 11091.95 159.50 45802.46 00:21:33.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 82.00 0.32 12347.00 6991.54 47956.29 00:21:33.072 ======================================================== 00:21:33.072 Total : 175.00 0.68 11680.03 159.50 47956.29 00:21:33.072 00:21:33.072 19:50:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:33.072 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.445 Initializing NVMe Controllers 00:21:34.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:34.445 Initialization complete. Launching workers. 
00:21:34.445 ======================================================== 00:21:34.445 Latency(us) 00:21:34.445 Device Information : IOPS MiB/s Average min max 00:21:34.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8512.65 33.25 3759.33 544.08 8145.03 00:21:34.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3904.84 15.25 8238.59 6279.12 15716.87 00:21:34.445 ======================================================== 00:21:34.445 Total : 12417.49 48.51 5167.89 544.08 15716.87 00:21:34.445 00:21:34.445 19:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:34.445 19:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:34.445 19:50:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:34.445 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.973 Initializing NVMe Controllers 00:21:36.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.973 Controller IO queue size 128, less than required. 00:21:36.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:36.973 Controller IO queue size 128, less than required. 00:21:36.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:36.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:36.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:36.973 Initialization complete. Launching workers. 00:21:36.973 ======================================================== 00:21:36.973 Latency(us) 00:21:36.973 Device Information : IOPS MiB/s Average min max 00:21:36.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1520.07 380.02 85907.23 52586.69 119045.15 00:21:36.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 587.83 146.96 222742.01 86057.40 316901.38 00:21:36.973 ======================================================== 00:21:36.973 Total : 2107.90 526.97 124066.51 52586.69 316901.38 00:21:36.973 00:21:36.973 19:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:36.973 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.973 No valid NVMe controllers or AIO or URING devices found 00:21:36.973 Initializing NVMe Controllers 00:21:36.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.973 Controller IO queue size 128, less than required. 00:21:36.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:36.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:36.973 Controller IO queue size 128, less than required. 00:21:36.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:36.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
00:21:36.973 19:50:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:21:36.973 EAL: No free 2048 kB hugepages reported on node 1
00:21:36.973 No valid NVMe controllers or AIO or URING devices found
00:21:36.973 Initializing NVMe Controllers
00:21:36.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:36.973 Controller IO queue size 128, less than required.
00:21:36.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:36.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:36.973 Controller IO queue size 128, less than required.
00:21:36.973 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:36.973 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:21:36.973 WARNING: Some requested NVMe devices were skipped
00:21:36.973 19:50:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:36.973 EAL: No free 2048 kB hugepages reported on node 1
00:21:39.501 Initializing NVMe Controllers
00:21:39.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:39.501 Controller IO queue size 128, less than required.
00:21:39.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:39.501 Controller IO queue size 128, less than required.
00:21:39.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:39.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:39.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:39.501 Initialization complete. Launching workers.
00:21:39.501
00:21:39.501 ====================
00:21:39.501 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:39.501 TCP transport:
00:21:39.501 polls: 24698
00:21:39.501 idle_polls: 8215
00:21:39.501 sock_completions: 16483
00:21:39.501 nvme_completions: 4541
00:21:39.501 submitted_requests: 6808
00:21:39.501 queued_requests: 1
00:21:39.501
00:21:39.501 ====================
00:21:39.501 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:39.501 TCP transport:
00:21:39.501 polls: 28018
00:21:39.501 idle_polls: 11292
00:21:39.501 sock_completions: 16726
00:21:39.501 nvme_completions: 4487
00:21:39.501 submitted_requests: 6716
00:21:39.501 queued_requests: 1
00:21:39.501 ========================================================
00:21:39.501 Latency(us)
00:21:39.501 Device Information : IOPS MiB/s Average min max
00:21:39.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1134.30 283.57 116130.25 73417.57 178122.07
00:21:39.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1120.81 280.20 116367.77 53915.00 166472.13
00:21:39.501 ========================================================
00:21:39.501 Total : 2255.10 563.78 116248.30 53915.00 178122.07
00:21:39.501
00:21:39.501 19:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:21:39.501 19:50:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # nvmfcleanup
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:21:39.758 rmmod nvme_tcp
00:21:39.758 rmmod nvme_fabrics
00:21:39.758 rmmod nvme_keyring
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # '[' -n 1235106 ']'
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # killprocess 1235106
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@951 -- # '[' -z 1235106 ']'
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # kill -0 1235106
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # uname
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:21:39.758 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1235106
00:21:40.016 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:21:40.016 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:21:40.016 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1235106'
00:21:40.016 killing process with pid 1235106
00:21:40.016 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # kill 1235106
00:21:40.016 19:50:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@975 -- # wait 1235106
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' '' == iso ']'
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]]
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # nvmf_tcp_fini
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@282 -- # remove_spdk_ns
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:41.913 19:50:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:43.849 19:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1
00:21:43.849
00:21:43.849 real 0m21.401s
00:21:43.849 user 1m5.811s
00:21:43.849 sys 0m5.315s
00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # xtrace_disable
00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:43.850 ************************************
00:21:43.850 END TEST nvmf_perf
00:21:43.850 ************************************
00:21:43.850 19:51:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:43.850 19:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']'
00:21:43.850 19:51:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable
00:21:43.850 19:51:00 nvmf_tcp.nvmf_host --
common/autotest_common.sh@10 -- # set +x 00:21:43.850 ************************************ 00:21:43.850 START TEST nvmf_fio_host 00:21:43.850 ************************************ 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:43.850 * Looking for test storage... 00:21:43.850 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:43.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # xtrace_disable 00:21:43.850 19:51:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # pci_devs=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -a pci_devs 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # pci_net_devs=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # pci_drivers=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -A pci_drivers 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # net_devs=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # local -ga net_devs 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # e810=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # local -ga e810 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # x722=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # local -ga x722 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # mlx=() 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # local -ga mlx 
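gather_supported_nvmf_pci_devs, traced below, sorts the node's NICs into e810/x722/mlx buckets by PCI vendor:device ID; 0x8086:0x159b is the Intel E810 pair this job runs on. A rough standalone sketch of the same classification idea, assuming lspci is available (SPDK's common.sh walks a prebuilt pci_bus_cache instead):

    # hypothetical classifier; the vendor:device IDs are the ones from the trace
    declare -a e810=() x722=()
    while read -r addr _ id _; do
        case "$id" in
            8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810 family
            8086:37d2)           x722+=("$addr") ;;  # Intel X722
        esac
    done < <(lspci -Dn)
    printf 'e810 devices: %s\n' "${e810[*]}"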
00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:45.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:45.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.750 19:51:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:45.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:45.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.750 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # is_hw=yes 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:21:45.751 19:51:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.751 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:21:46.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:21:46.009 00:21:46.009 --- 10.0.0.2 ping statistics --- 00:21:46.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.009 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:46.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:21:46.009 00:21:46.009 --- 10.0.0.1 ping statistics --- 00:21:46.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.009 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # return 0 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:46.009 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@725 -- # xtrace_disable 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1239194 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1239194 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@832 -- # '[' -z 1239194 ']' 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:46.010 19:51:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:46.010 [2024-07-24 19:51:03.221454] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
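nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test topology: cvl_0_0 is moved into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt is then launched inside the namespace. Condensed from the trace into a sketch (interface names are specific to this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF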
00:21:46.010 [2024-07-24 19:51:03.221536] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.010 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.010 [2024-07-24 19:51:03.295212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.267 [2024-07-24 19:51:03.409065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.267 [2024-07-24 19:51:03.409125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.267 [2024-07-24 19:51:03.409139] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.267 [2024-07-24 19:51:03.409150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.267 [2024-07-24 19:51:03.409159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.267 [2024-07-24 19:51:03.409235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.267 [2024-07-24 19:51:03.409303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.267 [2024-07-24 19:51:03.409325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.267 [2024-07-24 19:51:03.409328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.832 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:46.832 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@865 -- # return 0 00:21:46.832 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:47.089 [2024-07-24 19:51:04.452567] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.347 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:47.347 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@731 -- # xtrace_disable 00:21:47.347 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:47.347 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:47.604 Malloc1 00:21:47.604 19:51:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:47.861 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:48.119 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.375 [2024-07-24 19:51:05.544689] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.375 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:48.631 
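With the target up, fio.sh provisions it over the default RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and data plus discovery listeners. The same sequence, collected from the trace above into one sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1              # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420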
19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:48.631 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:48.631 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1361 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:48.631 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local sanitizers 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # shift 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local asan_lib= 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # grep libasan 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # asan_lib= 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # asan_lib= 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:48.632 19:51:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:48.889 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:48.889 fio-3.35 00:21:48.889 Starting 
1 thread
00:21:48.889 EAL: No free 2048 kB hugepages reported on node 1
00:21:51.415
00:21:51.415 test: (groupid=0, jobs=1): err= 0: pid=1239680: Wed Jul 24 19:51:08 2024
00:21:51.415 read: IOPS=8328, BW=32.5MiB/s (34.1MB/s)(65.3MiB/2007msec)
00:21:51.415 slat (nsec): min=1881, max=156661, avg=2674.91, stdev=2107.13
00:21:51.415 clat (usec): min=2236, max=13474, avg=8417.48, stdev=681.33
00:21:51.415 lat (usec): min=2267, max=13477, avg=8420.16, stdev=681.22
00:21:51.415 clat percentiles (usec):
00:21:51.415 | 1.00th=[ 6849], 5.00th=[ 7308], 10.00th=[ 7570], 20.00th=[ 7898],
00:21:51.415 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586],
00:21:51.415 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9241], 95.00th=[ 9503],
00:21:51.415 | 99.00th=[ 9896], 99.50th=[10028], 99.90th=[10814], 99.95th=[12911],
00:21:51.415 | 99.99th=[13435]
00:21:51.415 bw ( KiB/s): min=32232, max=34024, per=100.00%, avg=33316.00, stdev=768.57, samples=4
00:21:51.415 iops : min= 8058, max= 8506, avg=8329.00, stdev=192.14, samples=4
00:21:51.415 write: IOPS=8332, BW=32.5MiB/s (34.1MB/s)(65.3MiB/2007msec); 0 zone resets
00:21:51.415 slat (usec): min=2, max=144, avg= 2.83, stdev= 1.76
00:21:51.415 clat (usec): min=1703, max=13355, avg=6883.27, stdev=599.36
00:21:51.415 lat (usec): min=1712, max=13358, avg=6886.10, stdev=599.33
00:21:51.415 clat percentiles (usec):
00:21:51.415 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6194], 20.00th=[ 6456],
00:21:51.415 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7046],
00:21:51.415 | 70.00th=[ 7177], 80.00th=[ 7308], 90.00th=[ 7504], 95.00th=[ 7701],
00:21:51.415 | 99.00th=[ 8160], 99.50th=[ 8291], 99.90th=[12256], 99.95th=[13042],
00:21:51.415 | 99.99th=[13304]
00:21:51.415 bw ( KiB/s): min=32984, max=33552, per=99.94%, avg=33310.00, stdev=270.54, samples=4
00:21:51.415 iops : min= 8246, max= 8388, avg=8327.50, stdev=67.63, samples=4
00:21:51.415 lat (msec) : 2=0.02%, 4=0.11%, 10=99.53%, 20=0.33%
00:21:51.415 cpu : usr=57.38%, sys=39.23%, ctx=68, majf=0, minf=38
00:21:51.415 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:21:51.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:51.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:51.415 issued rwts: total=16716,16724,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:51.415 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:51.415
00:21:51.415 Run status group 0 (all jobs):
00:21:51.415 READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=65.3MiB (68.5MB), run=2007-2007msec
00:21:51.415 WRITE: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=65.3MiB (68.5MB), run=2007-2007msec
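fio has no NVMe-oF initiator of its own; the fio_plugin helper traced in this test LD_PRELOADs SPDK's spdk_nvme ioengine and passes the connection parameters through --filename in place of a device path. Reduced to a sketch with the paths and target from this run:

    PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
    LD_PRELOAD="$PLUGIN" /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The mock_sgl_config.fio job that follows uses the same mechanism with a 16 KiB block size.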
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1361 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local sanitizers
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # shift
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local asan_lib=
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}"
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # grep libasan
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # awk '{print $3}'
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # asan_lib=
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # [[ -n '' ]]
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}"
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # awk '{print $3}'
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # asan_lib=
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # [[ -n '' ]]
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:21:51.415 19:51:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:51.415 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:21:51.415 fio-3.35
00:21:51.415 Starting 1 thread
00:21:51.415 EAL: No free 2048 kB hugepages reported on node 1
00:21:53.998
00:21:53.998 test: (groupid=0, jobs=1): err= 0: pid=1240502: Wed Jul 24 19:51:10 2024
00:21:53.998 read: IOPS=8412, BW=131MiB/s (138MB/s)(264MiB/2008msec)
00:21:53.998 slat (nsec): min=2895, max=92637, avg=3706.38, stdev=1523.07
00:21:53.998 clat (usec): min=1150, max=16388, avg=8730.30, stdev=1955.99
00:21:53.998 lat (usec): min=1154, max=16392, avg=8734.01, stdev=1956.01
00:21:53.998 clat percentiles (usec):
00:21:53.998 | 1.00th=[ 4752], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 7046],
00:21:53.998 | 30.00th=[ 7570], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9372],
00:21:53.998 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[11863],
00:21:53.998 | 99.00th=[13435], 99.50th=[14222], 99.90th=[15926], 99.95th=[15926],
00:21:53.998 | 99.99th=[16057]
00:21:53.998 bw ( KiB/s): min=62688, max=78592, per=51.95%, avg=69928.00,
stdev=7223.38, samples=4 00:21:53.998 iops : min= 3918, max= 4912, avg=4370.50, stdev=451.46, samples=4 00:21:53.998 write: IOPS=4920, BW=76.9MiB/s (80.6MB/s)(143MiB/1862msec); 0 zone resets 00:21:53.998 slat (usec): min=30, max=129, avg=33.60, stdev= 4.63 00:21:53.998 clat (usec): min=6537, max=21381, avg=11231.65, stdev=1912.67 00:21:53.998 lat (usec): min=6568, max=21413, avg=11265.25, stdev=1912.69 00:21:53.998 clat percentiles (usec): 00:21:53.998 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:21:53.998 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:21:53.998 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13829], 95.00th=[14746], 00:21:53.998 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17433], 99.95th=[18220], 00:21:53.998 | 99.99th=[21365] 00:21:53.998 bw ( KiB/s): min=65888, max=80992, per=92.51%, avg=72832.00, stdev=6918.07, samples=4 00:21:53.998 iops : min= 4118, max= 5062, avg=4552.00, stdev=432.38, samples=4 00:21:53.998 lat (msec) : 2=0.06%, 4=0.10%, 10=56.81%, 20=43.03%, 50=0.01% 00:21:53.998 cpu : usr=76.69%, sys=20.97%, ctx=32, majf=0, minf=56 00:21:53.998 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:53.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:53.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:53.998 issued rwts: total=16892,9162,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:53.998 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:53.998 00:21:53.998 Run status group 0 (all jobs): 00:21:53.998 READ: bw=131MiB/s (138MB/s), 131MiB/s-131MiB/s (138MB/s-138MB/s), io=264MiB (277MB), run=2008-2008msec 00:21:53.998 WRITE: bw=76.9MiB/s (80.6MB/s), 76.9MiB/s-76.9MiB/s (80.6MB/s-80.6MB/s), io=143MiB (150MB), run=1862-1862msec 00:21:53.998 19:51:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # nvmfcleanup 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.998 rmmod nvme_tcp 00:21:53.998 rmmod nvme_fabrics 00:21:53.998 rmmod nvme_keyring 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # '[' -n 1239194 ']' 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@494 -- # killprocess 1239194 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' -z 1239194 ']' 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # kill -0 1239194 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # uname 00:21:53.998 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:21:53.999 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1239194 00:21:53.999 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:21:53.999 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:21:53.999 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1239194' 00:21:53.999 killing process with pid 1239194 00:21:53.999 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # kill 1239194 00:21:53.999 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@975 -- # wait 1239194 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@282 -- # remove_spdk_ns 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.256 19:51:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:21:56.788 00:21:56.788 real 0m12.736s 00:21:56.788 user 0m37.445s 00:21:56.788 sys 0m4.471s 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # xtrace_disable 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.788 ************************************ 00:21:56.788 END TEST nvmf_fio_host 00:21:56.788 ************************************ 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.788 ************************************ 00:21:56.788 START TEST nvmf_failover 00:21:56.788 ************************************ 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:56.788 * Looking for test storage... 
00:21:56.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[same three toolchain dirs repeated by re-sourcing; duplicates condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[duplicates condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[duplicates condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.788 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
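The "[: : integer expression expected" error above (nvmf/common.sh line 33) is bash rejecting an empty string as a numeric operand: build_nvmf_app_args evaluates '[' '' -eq 1 ']' because the flag it tests is unset in this job's autorun-spdk.conf. It is harmless here, since the test simply evaluates false and the script moves on, but the usual fix is to default the operand. A minimal sketch, with SOME_FLAG as a hypothetical stand-in for whichever variable common.sh reads:

  #!/usr/bin/env bash
  SOME_FLAG=""                          # unset/empty, as in this run

  if [ "$SOME_FLAG" -eq 1 ]; then       # reproduces: [: : integer expression expected
      echo "flag enabled"
  fi

  if [ "${SOME_FLAG:-0}" -eq 1 ]; then  # ${var:-0} keeps the operand numeric
      echo "flag enabled"
  fi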
00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@452 -- # prepare_net_devs 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # local -g is_hw=no 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # remove_spdk_ns 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # xtrace_disable 00:21:56.789 19:51:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # pci_devs=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -a pci_devs 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # pci_net_devs=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # pci_drivers=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -A pci_drivers 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # net_devs=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # local -ga net_devs 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # e810=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # local -ga e810 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # x722=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # local -ga x722 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # mlx=() 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # local -ga mlx 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.691 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.691 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@387 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.691 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.691 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # [[ up == up ]] 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # is_hw=yes 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:21:58.692 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.692 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:21:58.692 00:21:58.692 --- 10.0.0.2 ping statistics --- 00:21:58.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.692 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:21:58.692 00:21:58.692 --- 10.0.0.1 ping statistics --- 00:21:58.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.692 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # return 0 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@725 -- # xtrace_disable 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@485 -- # nvmfpid=1242741 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@486 -- # waitforlisten 1242741 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@832 -- # '[' -z 1242741 ']' 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local max_retries=100 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@841 -- # xtrace_disable 00:21:58.692 19:51:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:58.692 [2024-07-24 19:51:15.861917] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:21:58.692 [2024-07-24 19:51:15.861989] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.692 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.692 [2024-07-24 19:51:15.929231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.692 [2024-07-24 19:51:16.042283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.692 [2024-07-24 19:51:16.042337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.692 [2024-07-24 19:51:16.042367] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.692 [2024-07-24 19:51:16.042380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.692 [2024-07-24 19:51:16.042391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.692 [2024-07-24 19:51:16.042469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.692 [2024-07-24 19:51:16.042502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.692 [2024-07-24 19:51:16.042505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@865 -- # return 0 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@731 -- # xtrace_disable 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.951 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:59.209 [2024-07-24 19:51:16.460443] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.209 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:59.467 Malloc0 00:21:59.467 19:51:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.725 19:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.982 19:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.238 [2024-07-24 19:51:17.562660] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.238 19:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:00.496 [2024-07-24 19:51:17.811409] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:00.496 19:51:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:00.785 [2024-07-24 19:51:18.056218] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1243051 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1243051 /var/tmp/bdevperf.sock 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@832 -- # '[' -z 1243051 ']' 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:00.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
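The nvmftestinit block earlier in this test is what makes it a "phy" run: the two ice ports are split so target and initiator traffic crosses a real link, with cvl_0_0 moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 (target side) and cvl_0_1 left in the root namespace as 10.0.0.1 (initiator side), verified with a ping in each direction. Condensed into a standalone sketch (the same commands the log records, interface names from this run):

  # start clean, then move the target port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator ns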
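With the target up inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xE, reactors on cores 1-3), failover.sh provisions it over the default RPC socket before the initiator side starts: a TCP transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace, and listeners on all three ports so there is always somewhere to fail over to. The same sequence as a sketch ($rpc abbreviates the scripts/rpc.py path used throughout this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # transport options exactly as recorded above
  $rpc bdev_malloc_create 64 512 -b Malloc0           # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do                      # one listener per test port
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done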
00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:00.785 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:01.044 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:01.044 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@865 -- # return 0 00:22:01.044 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.610 NVMe0n1 00:22:01.610 19:51:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:01.867 00:22:01.867 19:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1243178 00:22:01.867 19:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:01.867 19:51:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:03.242 19:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.242 [2024-07-24 19:51:20.472949] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.242 [2024-07-24 19:51:20.473065] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.242 [2024-07-24 19:51:20.473081] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.242 [2024-07-24 19:51:20.473094] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.243 [2024-07-24 19:51:20.473106] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.243 [2024-07-24 19:51:20.473118] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.243 [2024-07-24 19:51:20.473129] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2286fc0 is same with the state(6) to be set 00:22:03.243 19:51:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:06.539 19:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:06.539 00:22:06.539 19:51:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:06.798 [2024-07-24 19:51:24.118713] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287d80 is same with the state(6) to 
be set 00:22:06.798 [2024-07-24 19:51:24.118794] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2287d80 is same with the state(6) to be set 00:22:06.798 [the same recv-state message for tqpair=0x2287d80 repeats for several dozen consecutive entries through 19:51:24.119444; duplicate lines condensed] 00:22:06.799 19:51:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:10.088 19:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.088 [2024-07-24 19:51:27.371785] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.088 19:51:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:11.024 19:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:11.282 [2024-07-24 19:51:28.628098] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2288b10 is same with the state(6) to be set 00:22:11.282 [again the same recv-state message, now for tqpair=0x2288b10, repeats for several dozen consecutive entries through 19:51:28.628799; duplicate lines condensed] 00:22:11.283 19:51:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1243178 00:22:17.867 0 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1243051 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # '[' -z 1243051 ']' 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # kill -0 1243051 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # uname 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1243051 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1243051' 00:22:17.868 killing process with pid 1243051 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # kill 1243051 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@975 -- # wait 1243051 00:22:17.868 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:17.868 [2024-07-24 19:51:18.122817] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:22:17.868 [2024-07-24 19:51:18.122908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243051 ] 00:22:17.868 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.868 [2024-07-24 19:51:18.185168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.868 [2024-07-24 19:51:18.294039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.868 Running I/O for 15 seconds...
00:22:17.868 [2024-07-24 19:51:20.473430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.868 [2024-07-24 19:51:20.473472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.868 [2024-07-24 19:51:20.473501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.868 [2024-07-24 19:51:20.473518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.868 [the same print_command/print_completion pairing repeats for every command outstanding on qid:1 when the 4420 listener is removed: WRITEs at lba:80776 through lba:81032 and READs at lba:80072 through lba:80192, each len:8, each completing ABORTED - SQ DELETION (00/08); duplicate entries condensed, and the capture is cut off mid-stream at 19:51:20.475047]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.868 [2024-07-24 19:51:20.475062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.868 [2024-07-24 19:51:20.475075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.868 [2024-07-24 19:51:20.475090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.868 [2024-07-24 19:51:20.475103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.869 [2024-07-24 19:51:20.475131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.869 [2024-07-24 19:51:20.475159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.869 [2024-07-24 19:51:20.475186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.869 [2024-07-24 19:51:20.475214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475361] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.475970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.475983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:17.869 [2024-07-24 19:51:20.475999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476305] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.869 [2024-07-24 19:51:20.476828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.869 [2024-07-24 19:51:20.476843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.476856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.476871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.476884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.476899] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.476916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.476931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.476945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.476960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.476974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.476989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 
lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.870 [2024-07-24 19:51:20.477363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1357c10 is same with the state(6) to be set 00:22:17.870 [2024-07-24 19:51:20.477393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.870 [2024-07-24 19:51:20.477404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.870 [2024-07-24 19:51:20.477416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0 00:22:17.870 [2024-07-24 19:51:20.477429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.870 [2024-07-24 19:51:20.477498] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1357c10 was disconnected and freed. reset controller. 
00:22:17.870 [2024-07-24 19:51:20.477515] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:22:17.870 [2024-07-24 19:51:20.477550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.870 [2024-07-24 19:51:20.477575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.870 [2024-07-24 19:51:20.477604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.870 [2024-07-24 19:51:20.477625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.870 [2024-07-24 19:51:20.477640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.870 [2024-07-24 19:51:20.477653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.870 [2024-07-24 19:51:20.477667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.870 [2024-07-24 19:51:20.477680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.870 [2024-07-24 19:51:20.477693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:17.870 [2024-07-24 19:51:20.477743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133a0f0 (9): Bad file descriptor
00:22:17.870 [2024-07-24 19:51:20.481836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:17.870 [2024-07-24 19:51:20.639930] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:17.870 [2024-07-24 19:51:24.119851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:17.870 [2024-07-24 19:51:24.119895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair repeats after the failover for every outstanding I/O on qid:1 (READ lba:107560-107976, WRITE lba:107984-108184); each command is aborted with SQ DELETION (00/08) ...]
00:22:17.872 [2024-07-24 19:51:24.122289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:108192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:17.872 [2024-07-24 19:51:24.122308] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:108200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:108208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:108216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:108232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:108240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:108264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:108296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:108320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:108336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:108352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.872 [2024-07-24 19:51:24.122952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.122987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.872 [2024-07-24 19:51:24.123005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108376 len:8 PRP1 0x0 PRP2 0x0 00:22:17.872 [2024-07-24 19:51:24.123018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.872 [2024-07-24 19:51:24.123036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.872 [2024-07-24 19:51:24.123048] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.872 [2024-07-24 19:51:24.123060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108384 len:8 PRP1 0x0 PRP2 0x0 00:22:17.872 [2024-07-24 19:51:24.123073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123097] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108392 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108400 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108408 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 
[2024-07-24 19:51:24.123236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108416 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108424 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123339] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108432 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108440 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108448 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108456 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123535] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108464 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108472 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108480 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123686] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108488 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108496 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108504 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108512 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108520 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108528 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.123963] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.123974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.123985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108536 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.123997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.873 [2024-07-24 19:51:24.124010] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.873 [2024-07-24 19:51:24.124021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.873 [2024-07-24 19:51:24.124032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108544 len:8 PRP1 0x0 PRP2 0x0 00:22:17.873 [2024-07-24 19:51:24.124044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:24.124056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.874 [2024-07-24 19:51:24.124067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.874 [2024-07-24 19:51:24.124078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108552 len:8 PRP1 0x0 PRP2 0x0 00:22:17.874 [2024-07-24 19:51:24.124093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:24.124106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:17.874 [2024-07-24 
19:51:24.124117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:17.874 [2024-07-24 19:51:24.124128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108560 len:8 PRP1 0x0 PRP2 0x0
00:22:17.874 [2024-07-24 19:51:24.124140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.874 [2024-07-24 19:51:24.124153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:17.874 [2024-07-24 19:51:24.124163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:17.874 [2024-07-24 19:51:24.124174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108568 len:8 PRP1 0x0 PRP2 0x0
00:22:17.874 [2024-07-24 19:51:24.124186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.874 [2024-07-24 19:51:24.124273] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1368d40 was disconnected and freed. reset controller.
00:22:17.874 [2024-07-24 19:51:24.124301] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:17.874 [2024-07-24 19:51:24.124336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.874 [2024-07-24 19:51:24.124362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.874 [2024-07-24 19:51:24.124377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.874 [2024-07-24 19:51:24.124390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.874 [2024-07-24 19:51:24.124404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.874 [2024-07-24 19:51:24.124417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.874 [2024-07-24 19:51:24.124430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:17.874 [2024-07-24 19:51:24.124443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:17.874 [2024-07-24 19:51:24.124455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:17.874 [2024-07-24 19:51:24.124497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133a0f0 (9): Bad file descriptor
00:22:17.874 [2024-07-24 19:51:24.128445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:17.874 [2024-07-24 19:51:24.252932] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:22:17.874 [2024-07-24 19:51:28.629044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629424] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.874 [2024-07-24 19:51:28.629438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.874 [2024-07-24 19:51:28.629452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:55656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.629978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.629991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:55680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:55720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:55760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:17.875 [2024-07-24 19:51:28.630499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.875 [2024-07-24 19:51:28.630527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.875 [2024-07-24 19:51:28.630572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:55848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.875 [2024-07-24 19:51:28.630600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:55856 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:17.875 [2024-07-24 19:51:28.630627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:55864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.875 [2024-07-24 19:51:28.630654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.875 [2024-07-24 19:51:28.630668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:55872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:55896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:55912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:55928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630903] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:17.876 [2024-07-24 19:51:28.630918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:55944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:17.876 [2024-07-24 19:51:28.630931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs repeat for every in-flight write from lba:55952 through lba:56208 (len:8 each), elided for brevity ...]
00:22:17.877 [2024-07-24 19:51:28.631933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:17.877 [2024-07-24 19:51:28.631950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56216 len:8 PRP1 0x0 PRP2 0x0 00:22:17.877 [2024-07-24 19:51:28.631964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same "aborting queued i/o" / "Command completed manually" / ABORTED - SQ DELETION triplet repeats for every software-queued WRITE from lba:56224 through lba:56456 and for the queued READs at lba:55816 and lba:55824, elided for brevity ...]
00:22:17.878 [2024-07-24 19:51:28.633577] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x136ab40 was disconnected and freed. reset controller.
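The condensed run above is the expected host-side signature of a queue pair being torn down while I/O is in flight: every outstanding command completes with ABORTED - SQ DELETION, and requests still queued in software are completed manually before the reconnect. When triaging a capture like this, a few greps summarize it quickly; these are illustrative commands only, with try.txt standing in for wherever the output was saved (the file name this test uses appears later in the trace):

    grep -c 'ABORTED - SQ DELETION' try.txt        # aborted in-flight and queued commands
    grep -c 'aborting queued i/o' try.txt          # software-queued requests completed manually
    grep -c 'was disconnected and freed' try.txt   # one line per dropped qpair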
00:22:17.878 [2024-07-24 19:51:28.633595] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... the four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) are each printed and completed with ABORTED - SQ DELETION (00/08), elided for brevity ...]
00:22:17.878 [2024-07-24 19:51:28.633739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:17.878 [2024-07-24 19:51:28.633781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x133a0f0 (9): Bad file descriptor
00:22:17.878 [2024-07-24 19:51:28.637733] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:17.878 [2024-07-24 19:51:28.785166] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
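Each "Start failover" / "Resetting controller successful" pair in this output is one completed path switch, and the script tallies them once the run ends. A minimal sketch of that check, assuming the output above was captured to the try.txt file this test uses (the grep, count=3, and (( count != 3 )) steps appear verbatim in the trace below):

    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
    if (( count != 3 )); then        # one reset per forced failover in this run
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi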
00:22:17.878
00:22:17.878 Latency(us)
00:22:17.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.878 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:17.878 Verification LBA range: start 0x0 length 0x4000
00:22:17.878 NVMe0n1 : 15.01 8143.38 31.81 1124.40 0.00 13783.22 831.34 16699.54
00:22:17.878 ===================================================================================================================
00:22:17.878 Total : 8143.38 31.81 1124.40 0.00 13783.22 831.34 16699.54
00:22:17.878 Received shutdown signal, test time was about 15.000000 seconds
00:22:17.878
00:22:17.878 Latency(us)
00:22:17.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:17.878 ===================================================================================================================
00:22:17.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1244981 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1244981 /var/tmp/bdevperf.sock 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@832 -- # '[' -z 1244981 ']' 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
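The bdevperf instance launched above runs with -z, so it starts idle and waits on /var/tmp/bdevperf.sock; the NVMe-oF bdev and the workload are then fed to it over that RPC socket. A condensed sketch of the sequence the following trace performs (paths, socket, and arguments are copied from the log; this is an illustration, not the literal failover.sh):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests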
00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@865 -- # return 0 00:22:17.878 19:51:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.138 [2024-07-24 19:51:35.257278] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.138 19:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:18.405 [2024-07-24 19:51:35.550057] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:18.405 19:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:18.663 NVMe0n1 00:22:18.663 19:51:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.231 00:22:19.231 19:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:19.488 00:22:19.488 19:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:19.488 19:51:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:19.750 19:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:20.009 19:51:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:23.292 19:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:23.292 19:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:23.292 19:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1245665 00:22:23.292 19:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:23.292 19:51:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1245665 00:22:24.699 0 00:22:24.699 19:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:24.699 [2024-07-24 19:51:34.698698] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:22:24.699 [2024-07-24 19:51:34.698778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244981 ] 00:22:24.699 EAL: No free 2048 kB hugepages reported on node 1 00:22:24.699 [2024-07-24 19:51:34.756961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.699 [2024-07-24 19:51:34.863620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.699 [2024-07-24 19:51:37.318398] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:24.699 [2024-07-24 19:51:37.318480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.699 [2024-07-24 19:51:37.318502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.699 [2024-07-24 19:51:37.318518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.699 [2024-07-24 19:51:37.318531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.699 [2024-07-24 19:51:37.318545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.699 [2024-07-24 19:51:37.318570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.699 [2024-07-24 19:51:37.318583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:24.699 [2024-07-24 19:51:37.318597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:24.699 [2024-07-24 19:51:37.318610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:24.699 [2024-07-24 19:51:37.318662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:24.699 [2024-07-24 19:51:37.318692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d90f0 (9): Bad file descriptor 00:22:24.699 [2024-07-24 19:51:37.326772] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:24.699 Running I/O for 1 seconds... 
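With that I/O running, the script pulls paths out from under the initiator one at a time; each detach of the active trid produces a "Start failover" / "Resetting controller successful" pair like the one captured in try.txt above. The real script issues these steps sequentially rather than in a loop; a condensed sketch with ports and arguments taken from the surrounding trace:

    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'
    for port in 4420 4422 4421; do
        $RPC bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
        $RPC bdev_nvme_get_controllers | grep -q NVMe0   # the controller must survive the detach
        sleep 3                                          # give bdev_nvme time to settle on the next path
    done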
00:22:24.699
00:22:24.699 Latency(us)
00:22:24.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:24.699 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:24.699 Verification LBA range: start 0x0 length 0x4000
00:22:24.699 NVMe0n1 : 1.01 8677.39 33.90 0.00 0.00 14662.73 2997.67 12524.66
00:22:24.699 ===================================================================================================================
00:22:24.699 Total : 8677.39 33.90 0.00 0.00 14662.73 2997.67 12524.66
00:22:24.699 19:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 19:51:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 19:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.265 19:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 19:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 19:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:25.523 19:51:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:28.808 19:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 19:51:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1244981 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # '[' -z 1244981 ']' 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # kill -0 1244981 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # uname 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1244981 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # process_name=reactor_0 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1244981' killing process with pid 1244981 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # kill 1244981 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@975 -- # wait 1244981 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:29.067 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # nvmfcleanup 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:29.324 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:29.324 rmmod nvme_tcp 00:22:29.324 rmmod nvme_fabrics 00:22:29.324 rmmod nvme_keyring 00:22:29.581 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:29.581 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:29.581 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # '[' -n 1242741 ']' 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # killprocess 1242741 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@951 -- # '[' -z 1242741 ']' 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # kill -0 1242741 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # uname 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1242741 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1242741' 00:22:29.582 killing process with pid 1242741 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # kill 1242741 00:22:29.582 19:51:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@975 -- # wait 1242741 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@282 -- # remove_spdk_ns 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.841 19:51:47 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:22:31.746 00:22:31.746 real 0m35.387s 00:22:31.746 user 2m4.201s 00:22:31.746 sys 0m6.309s 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:31.746 ************************************ 00:22:31.746 END TEST nvmf_failover 00:22:31.746 ************************************ 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:31.746 19:51:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.004 ************************************ 00:22:32.004 START TEST nvmf_host_discovery 00:22:32.004 ************************************ 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:32.004 * Looking for test storage... 00:22:32.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated, duplicates elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... remainder as above, duplicates elided ...] 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... remainder as above, duplicates elided ...] 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:32.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@452 -- # prepare_net_devs 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # local -g is_hw=no 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # remove_spdk_ns 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.004 
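The "[: : integer expression expected" complaint in the middle of this block is a benign script wart rather than a test failure: common.sh line 33 evaluates '[' '' -eq 1 ']' while the variable it tests is unset, and test(1) cannot compare an empty string numerically. A defensive sketch of the usual guard (the variable name here is hypothetical; the log does not show which one was empty):

    some_flag=""                           # unset/empty, as in the trace above
    if [ "${some_flag:-0}" -eq 1 ]; then   # defaulting to 0 keeps the operand numeric
        echo enabled
    fi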
19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # xtrace_disable 00:22:32.004 19:51:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # pci_devs=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -a pci_devs 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # pci_net_devs=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # pci_drivers=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -A pci_drivers 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # net_devs=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # local -ga net_devs 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # e810=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # local -ga e810 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # x722=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # local -ga x722 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # mlx=() 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # local -ga mlx 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 
00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:33.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:33.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # [[ up == up ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:33.919 Found net devices under 
0000:0a:00.0: cvl_0_0 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # [[ up == up ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:33.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # is_hw=yes 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:22:33.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:33.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:22:33.919 00:22:33.919 --- 10.0.0.2 ping statistics --- 00:22:33.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.919 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:33.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:33.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:22:33.919 00:22:33.919 --- 10.0.0.1 ping statistics --- 00:22:33.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:33.919 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@426 -- # return 0 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:33.919 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:22:33.920 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:22:33.920 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:33.920 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:22:33.920 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@725 -- # xtrace_disable 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@485 -- # nvmfpid=1248368 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@486 -- # waitforlisten 1248368 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@832 -- # '[' -z 1248368 ']' 00:22:34.178 19:51:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:34.178 19:51:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:34.178 [2024-07-24 19:51:51.373020] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:22:34.178 [2024-07-24 19:51:51.373099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.178 [2024-07-24 19:51:51.440498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.178 [2024-07-24 19:51:51.556897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.178 [2024-07-24 19:51:51.556956] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.178 [2024-07-24 19:51:51.556973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.178 [2024-07-24 19:51:51.556986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.178 [2024-07-24 19:51:51.557012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
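By this point nvmf_tcp_init has finished the two-port loopback topology the rest of the run depends on: the target port cvl_0_0 lives in the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened in iptables, both directions answer a single ping, and nvmf_tgt (pid 1248368) is started inside the namespace via the NVMF_TARGET_NS_CMD prefix. Condensed from the commands traced above (interface names are specific to this run):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port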
00:22:34.178 [2024-07-24 19:51:51.557042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.114 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:35.114 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@865 -- # return 0 00:22:35.114 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:35.114 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@731 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 [2024-07-24 19:51:52.331427] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 [2024-07-24 19:51:52.339575] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 null0 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 null1 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=1248522 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1248522 /tmp/host.sock 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@832 -- # '[' -z 1248522 ']' 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local rpc_addr=/tmp/host.sock 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:35.115 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:35.115 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.115 [2024-07-24 19:51:52.407723] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:22:35.115 [2024-07-24 19:51:52.407799] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248522 ] 00:22:35.115 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.115 [2024-07-24 19:51:52.464981] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.374 [2024-07-24 19:51:52.574572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@865 -- # return 0 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 
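Two SPDK apps are now running: the target (pid 1248368, on the default RPC socket) inside the namespace, and the host-side app (pid 1248522) listening on /tmp/host.sock. The rpc_cmd calls traced above go through the harness wrapper around scripts/rpc.py; a standalone sketch of the same bring-up, with socket paths as in this run:

# Target side (/var/tmp/spdk.sock by default): transport, discovery listener, two null bdevs.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
./scripts/rpc.py bdev_null_create null0 1000 512    # size_mb block_size, per the traced arguments
./scripts/rpc.py bdev_null_create null1 1000 512
./scripts/rpc.py bdev_wait_for_examine
# Host side (-s /tmp/host.sock): follow the discovery service on 10.0.0.2:8009.
./scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test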
00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.374 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
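The two queries above repeat for the rest of the log, so host/discovery.sh wraps them as helpers that flatten the RPC JSON into one sorted, space-separated string, which is what makes the bare [[ '' == '' ]] string comparisons work. Reconstructed from the traced pipeline, with rpc.py standing in for the rpc_cmd wrapper:

get_subsystem_names() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs   # "" now; "nvme0" once discovery attaches
}
get_bdev_list() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs   # "nvme0n1 nvme0n2" once both namespaces exist
}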
00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:35.633 [2024-07-24 19:51:52.969326] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.633 19:51:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_notification_count 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # jq '. | length' 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:35.893 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( notification_count == expected_count )) 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_names 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ '' == \n\v\m\e\0 ]] 00:22:35.894 19:51:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # sleep 1 00:22:36.463 [2024-07-24 19:51:53.749012] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:36.463 [2024-07-24 19:51:53.749043] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:36.463 [2024-07-24 19:51:53.749068] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log 
page command 00:22:36.463 [2024-07-24 19:51:53.837348] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:36.721 [2024-07-24 19:51:53.899905] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:36.721 [2024-07-24 19:51:53.899931] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_names 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_bdev_list 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ 
nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_paths nvme0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ 4420 == \4\4\2\0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_notification_count 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( notification_count == expected_count )) 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.980 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_bdev_list 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:36.981 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_notification_count 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.239 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( notification_count == expected_count )) 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.498 [2024-07-24 19:51:54.638065] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.498 [2024-07-24 19:51:54.639194] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:37.498 [2024-07-24 19:51:54.639233] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_names 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # jq -r '.[].name' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_bdev_list 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:37.498 [2024-07-24 19:51:54.725984] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path 
for nvme0 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_paths nvme0 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:37.498 19:51:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # sleep 1 00:22:37.756 [2024-07-24 19:51:54.985270] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:37.756 [2024-07-24 19:51:54.985313] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:37.756 [2024-07-24 19:51:54.985323] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_paths nvme0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 
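The is_notification_count_eq checks scattered through the trace audit the host app's notify bus with a moving cursor: each get_notification_count call (seen above with -i 0, -i 1, and here -i 2) counts only events newer than notify_id and then advances it, so a passing check means exactly expected_count new events since the last check. The helper body itself is not in this log; a reconstruction consistent with the notification_count/notify_id values traced:

# Assumed reconstruction of host/discovery.sh's get_notification_count.
get_notification_count() {
    notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))   # matches the 0 -> 1 -> 2 progression in this trace
}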
00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_notification_count 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( notification_count == expected_count )) 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.692 [2024-07-24 19:51:55.862337] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:38.692 [2024-07-24 19:51:55.862378] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:38.692 [2024-07-24 19:51:55.863317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.692 [2024-07-24 19:51:55.863373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.692 [2024-07-24 19:51:55.863398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.692 [2024-07-24 19:51:55.863414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.692 [2024-07-24 19:51:55.863428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.692 [2024-07-24 19:51:55.863441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.692 [2024-07-24 19:51:55.863455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:38.692 [2024-07-24 19:51:55.863468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:38.692 [2024-07-24 19:51:55.863482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_names 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.692 [2024-07-24 19:51:55.873320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.692 [2024-07-24 19:51:55.883371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.692 [2024-07-24 19:51:55.883623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.692 [2024-07-24 19:51:55.883656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.692 [2024-07-24 19:51:55.883674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.692 [2024-07-24 19:51:55.883700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.692 [2024-07-24 19:51:55.883723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.692 [2024-07-24 19:51:55.883738] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.692 [2024-07-24 19:51:55.883755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
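The error burst above is the intended effect of the nvmf_subsystem_remove_listener call: the target tears down the 4420 queue pair (the ABORTED - SQ DELETION completions are the in-flight async-event requests being cancelled), and every reconnect attempt to 4420 then fails with errno 111 (ECONNREFUSED) because nothing listens there any more, while the 4421 path stays attached. The triggering RPC, in standalone form:

# Drop the first listener; the host is expected to keep I/O on 10.0.0.2:4421
# and to log "connect() failed, errno = 111" retries against the dead 4420 path.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420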
00:22:38.692 [2024-07-24 19:51:55.883783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.692 [2024-07-24 19:51:55.893452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.692 [2024-07-24 19:51:55.893644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.692 [2024-07-24 19:51:55.893674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.692 [2024-07-24 19:51:55.893692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.692 [2024-07-24 19:51:55.893716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.692 [2024-07-24 19:51:55.893754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.692 [2024-07-24 19:51:55.893773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.692 [2024-07-24 19:51:55.893788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.692 [2024-07-24 19:51:55.893809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.692 [2024-07-24 19:51:55.903539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.692 [2024-07-24 19:51:55.903736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.692 [2024-07-24 19:51:55.903766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.692 [2024-07-24 19:51:55.903784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.692 [2024-07-24 19:51:55.903807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.692 [2024-07-24 19:51:55.903829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.692 [2024-07-24 19:51:55.903844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.692 [2024-07-24 19:51:55.903858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.692 [2024-07-24 19:51:55.903878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
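The repeating "connect() failed, errno = 111" (ECONNREFUSED) and "Bad file descriptor" cycle above is expected at this point: host/discovery.sh@127 just removed the 4420 listener, so every reconnect attempt bdev_nvme makes against that port is refused until the discovery log page steers the controller over to 4421. The triggering step, expressed with the rpc.py script that the rpc_cmd wrapper in this trace resolves to (a sketch, not copied verbatim from the log):

    # Target-side RPC socket (no -s flag): drop the first listener so the
    # initiator is forced onto the 4421 path.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420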
00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.692 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_bdev_list 00:22:38.693 [2024-07-24 19:51:55.913632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.693 [2024-07-24 19:51:55.913853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.693 [2024-07-24 19:51:55.913884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.693 [2024-07-24 19:51:55.913902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.693 [2024-07-24 19:51:55.913931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.693 [2024-07-24 19:51:55.913969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.693 [2024-07-24 19:51:55.913989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.693 [2024-07-24 19:51:55.914003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.693 [2024-07-24 19:51:55.914025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
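What the host/discovery.sh@129/@130 waits assert: after the 4420 path is torn down, controller nvme0 must stay attached through 4421 and both namespaces must survive. A hypothetical manual spot-check using the same host-side RPCs and jq filters this trace drives over /tmp/host.sock:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name'                     # expect: nvme0
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs      # expect: nvme0n1 nvme0n2
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'    # expect: 4421 (host/discovery.sh@63)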
00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.693 [2024-07-24 19:51:55.923710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.693 [2024-07-24 19:51:55.923922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.693 [2024-07-24 19:51:55.923950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.693 [2024-07-24 19:51:55.923966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.693 [2024-07-24 19:51:55.923988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.693 [2024-07-24 19:51:55.924020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.693 [2024-07-24 19:51:55.924037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.693 [2024-07-24 19:51:55.924050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.693 [2024-07-24 19:51:55.924069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.693 [2024-07-24 19:51:55.933797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.693 [2024-07-24 19:51:55.933989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.693 [2024-07-24 19:51:55.934016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.693 [2024-07-24 19:51:55.934032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.693 [2024-07-24 19:51:55.934054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.693 [2024-07-24 19:51:55.934097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.693 [2024-07-24 19:51:55.934116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.693 [2024-07-24 19:51:55.934130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.693 [2024-07-24 19:51:55.934148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
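The interleaved host/discovery.sh@55 and @59 fragments are two small list helpers whose space-joined output feeds the string comparisons above; reassembled from this xtrace (function bodies inferred, rpc_cmd being the autotest_common.sh wrapper around rpc.py):

    get_subsystem_names() {   # host/discovery.sh@59: controller names, one line
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {         # host/discovery.sh@55: bdev names, one line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }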
00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.693 [2024-07-24 19:51:55.943884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:38.693 [2024-07-24 19:51:55.944076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:38.693 [2024-07-24 19:51:55.944106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb08c90 with addr=10.0.0.2, port=4420 00:22:38.693 [2024-07-24 19:51:55.944129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb08c90 is same with the state(6) to be set 00:22:38.693 [2024-07-24 19:51:55.944153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb08c90 (9): Bad file descriptor 00:22:38.693 [2024-07-24 19:51:55.944188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:38.693 [2024-07-24 19:51:55.944207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:38.693 [2024-07-24 19:51:55.944222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:38.693 [2024-07-24 19:51:55.944251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:38.693 [2024-07-24 19:51:55.949198] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:38.693 [2024-07-24 19:51:55.949230] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_paths nvme0 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:38.693 19:51:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ 4421 == \4\4\2\1 ]] 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.693 19:51:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_notification_count 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( notification_count == expected_count )) 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( 
max-- )) 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_subsystem_names 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:38.693 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.951 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ '' == '' ]] 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_bdev_list 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # [[ '' == '' ]] 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local max=10 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( max-- )) 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # get_notification_count 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( notification_count == expected_count )) 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # return 0 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:38.952 19:51:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:39.883 [2024-07-24 19:51:57.197701] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:39.883 [2024-07-24 19:51:57.197734] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:39.883 [2024-07-24 19:51:57.197760] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.141 [2024-07-24 19:51:57.285027] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:40.141 [2024-07-24 19:51:57.394509] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:40.141 [2024-07-24 19:51:57.394570] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # local es=0 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.141 request: 00:22:40.141 { 00:22:40.141 "name": "nvme", 00:22:40.141 "trtype": "tcp", 00:22:40.141 "traddr": "10.0.0.2", 00:22:40.141 "adrfam": "ipv4", 00:22:40.141 "trsvcid": "8009", 00:22:40.141 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:40.141 "wait_for_attach": true, 00:22:40.141 "method": "bdev_nvme_start_discovery", 00:22:40.141 "req_id": 1 00:22:40.141 } 00:22:40.141 Got JSON-RPC error response 00:22:40.141 response: 00:22:40.141 { 00:22:40.141 "code": -17, 00:22:40.141 "message": "File exists" 00:22:40.141 } 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # es=1 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # jq -r '.[].name' 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:40.141 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # local es=0 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.142 request: 00:22:40.142 { 00:22:40.142 "name": "nvme_second", 00:22:40.142 "trtype": "tcp", 00:22:40.142 "traddr": "10.0.0.2", 00:22:40.142 "adrfam": "ipv4", 00:22:40.142 "trsvcid": "8009", 00:22:40.142 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:40.142 "wait_for_attach": true, 00:22:40.142 "method": "bdev_nvme_start_discovery", 00:22:40.142 "req_id": 1 00:22:40.142 } 00:22:40.142 Got JSON-RPC error response 00:22:40.142 response: 00:22:40.142 { 00:22:40.142 "code": -17, 00:22:40.142 "message": "File exists" 00:22:40.142 } 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # es=1 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r 
'.[].name' 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.142 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # local es=0 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:40.400 19:51:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:41.360 [2024-07-24 19:51:58.606046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:41.360 [2024-07-24 19:51:58.606121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb474a0 with addr=10.0.0.2, port=8010 00:22:41.360 [2024-07-24 19:51:58.606156] 
nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:41.360 [2024-07-24 19:51:58.606173] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:41.360 [2024-07-24 19:51:58.606189] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:42.293 [2024-07-24 19:51:59.608403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:42.293 [2024-07-24 19:51:59.608436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1a320 with addr=10.0.0.2, port=8010 00:22:42.293 [2024-07-24 19:51:59.608457] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:42.293 [2024-07-24 19:51:59.608470] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:42.293 [2024-07-24 19:51:59.608482] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:43.667 [2024-07-24 19:52:00.610644] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:43.667 request: 00:22:43.667 { 00:22:43.667 "name": "nvme_second", 00:22:43.667 "trtype": "tcp", 00:22:43.667 "traddr": "10.0.0.2", 00:22:43.667 "adrfam": "ipv4", 00:22:43.667 "trsvcid": "8010", 00:22:43.667 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:43.667 "wait_for_attach": false, 00:22:43.667 "attach_timeout_ms": 3000, 00:22:43.667 "method": "bdev_nvme_start_discovery", 00:22:43.667 "req_id": 1 00:22:43.667 } 00:22:43.667 Got JSON-RPC error response 00:22:43.667 response: 00:22:43.667 { 00:22:43.667 "code": -110, 00:22:43.667 "message": "Connection timed out" 00:22:43.667 } 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # es=1 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@562 -- # xtrace_disable 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1248522 00:22:43.667 19:52:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # nvmfcleanup 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:43.667 rmmod nvme_tcp 00:22:43.667 rmmod nvme_fabrics 00:22:43.667 rmmod nvme_keyring 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # '[' -n 1248368 ']' 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@494 -- # killprocess 1248368 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' -z 1248368 ']' 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # kill -0 1248368 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # uname 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1248368 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1248368' 00:22:43.667 killing process with pid 1248368 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # kill 1248368 00:22:43.667 19:52:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@975 -- # wait 1248368 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@282 -- # remove_spdk_ns 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.667 19:52:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:22:46.195 00:22:46.195 real 0m13.952s 00:22:46.195 user 0m20.088s 00:22:46.195 sys 0m2.887s 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # xtrace_disable 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.195 ************************************ 00:22:46.195 END TEST nvmf_host_discovery 00:22:46.195 ************************************ 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.195 ************************************ 00:22:46.195 START TEST nvmf_host_multipath_status 00:22:46.195 ************************************ 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:46.195 * Looking for test storage... 00:22:46.195 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.195 
19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@452 -- # prepare_net_devs 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # local -g is_hw=no 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # remove_spdk_ns 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@632 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # xtrace_disable 00:22:46.195 19:52:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # pci_devs=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -a pci_devs 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # pci_net_devs=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # pci_drivers=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -A pci_drivers 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # net_devs=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # local -ga net_devs 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # e810=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # local -ga e810 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # x722=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # local -ga x722 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # mlx=() 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # local -ga mlx 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.091 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
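The e810/x722/mlx arrays built above are vendor:device lookup tables keyed on PCI IDs; the [[ e810 == e810 ]] branches below keep only the e810 table, which is why both 0x8086:0x159b functions get selected on this rig. A hypothetical manual equivalent of that probe with lspci (not part of this script):

    lspci -D -d 8086:159b   # E810 functions matched below (0000:0a:00.0 / .1)
    lspci -D -d 8086:1592   # the other E810 ID common.sh checks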
00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:48.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:48.092 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.092 19:52:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # [[ up == up ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:48.092 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # [[ up == up ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:48.092 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # is_hw=yes 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:22:48.092 19:52:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:22:48.092 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:22:48.093 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.093 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:22:48.093 00:22:48.093 --- 10.0.0.2 ping statistics --- 00:22:48.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.093 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:48.093 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.093 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:22:48.093 00:22:48.093 --- 10.0.0.1 ping statistics --- 00:22:48.093 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.093 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # return 0 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@725 -- # xtrace_disable 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@485 -- # nvmfpid=1251548 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@486 -- # waitforlisten 1251548 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # '[' -z 1251548 ']' 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:48.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:48.093 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.093 [2024-07-24 19:52:05.228607] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:22:48.093 [2024-07-24 19:52:05.228700] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.093 EAL: No free 2048 kB hugepages reported on node 1 00:22:48.093 [2024-07-24 19:52:05.298164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:48.093 [2024-07-24 19:52:05.415704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.093 [2024-07-24 19:52:05.415756] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.093 [2024-07-24 19:52:05.415772] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.093 [2024-07-24 19:52:05.415787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.093 [2024-07-24 19:52:05.415799] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:48.093 [2024-07-24 19:52:05.415865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.093 [2024-07-24 19:52:05.415872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@865 -- # return 0 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@731 -- # xtrace_disable 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1251548 00:22:48.350 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:48.607 [2024-07-24 19:52:05.778363] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.607 19:52:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:48.865 Malloc0 00:22:48.865 19:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:49.122 19:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:49.379 19:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.636 [2024-07-24 19:52:06.783228] 
tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.636 19:52:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:49.894 [2024-07-24 19:52:07.039909] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1251831 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1251831 /var/tmp/bdevperf.sock 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # '[' -z 1251831 ']' 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local max_retries=100 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:49.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
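Condensed, the control-plane setup traced since the target started is six RPCs; the arguments below are copied verbatim from the log, with the full /var/jenkins/.../spdk/scripts/rpc.py path abbreviated to rpc.py for readability:

# Target-side RPC sequence as traced above (abbreviated paths only):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same address but different ports give bdevperf two independent paths to the same namespace, which is what the bdev_nvme_attach_controller calls (-s 4420, then -s 4421 -x multipath) and the ANA-state toggling below exercise.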
00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@841 -- # xtrace_disable 00:22:49.894 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:50.152 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:22:50.152 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@865 -- # return 0 00:22:50.152 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:50.409 19:52:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:50.975 Nvme0n1 00:22:50.975 19:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:51.540 Nvme0n1 00:22:51.540 19:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:51.540 19:52:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:53.437 19:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:53.437 19:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:53.694 19:52:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:53.951 19:52:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:54.883 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:54.883 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:54.883 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:54.883 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:55.140 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.140 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:55.140 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.140 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:55.398 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:55.398 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:55.398 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.398 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:55.655 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.655 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:55.655 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.655 19:52:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:55.912 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:55.912 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:55.912 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:55.912 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:56.178 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.179 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:56.179 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:56.179 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:56.437 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:56.437 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:56.437 19:52:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:56.695 19:52:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:56.952 19:52:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:57.916 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:57.916 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:57.916 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:57.916 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:58.174 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:58.174 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:58.174 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.174 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:58.431 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.432 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:58.432 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.432 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:58.689 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.689 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:58.689 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.689 19:52:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:58.946 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:58.946 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:58.946 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:58.946 19:52:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:59.203 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.203 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:59.203 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:59.203 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:59.461 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:59.461 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:59.461 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:59.718 19:52:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:59.975 19:52:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:00.907 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:00.907 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:00.907 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:00.907 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:01.164 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.164 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:01.164 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.164 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:01.422 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:01.422 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:01.422 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.422 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:01.679 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.679 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:01.679 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.679 19:52:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:01.937 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:01.937 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:01.937 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:01.937 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:02.194 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.194 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:02.194 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:02.194 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:02.451 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:02.451 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:02.451 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:02.848 19:52:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:03.106 19:52:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:04.036 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:04.036 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:04.036 19:52:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.036 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:04.293 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.293 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:04.293 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.293 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:04.550 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:04.550 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:04.550 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.550 19:52:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:04.807 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:04.807 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:04.807 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:04.807 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:05.064 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.064 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:05.064 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.064 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:05.322 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:05.322 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:05.322 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:05.322 19:52:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:05.579 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:05.579 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:05.579 19:52:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:05.837 19:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:06.094 19:52:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:07.026 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:07.026 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:07.026 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.026 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.283 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.283 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:07.283 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.283 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.541 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.541 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.541 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.541 19:52:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.797 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.797 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.797 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.797 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:08.054 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.054 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:08.054 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.054 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:08.311 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.311 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:08.311 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.311 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.569 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.569 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:08.569 19:52:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:08.826 19:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:09.084 19:52:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:10.016 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:10.016 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:10.016 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.016 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:10.273 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.273 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.273 19:52:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.273 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.530 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.530 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.530 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.530 19:52:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.789 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.789 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.789 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.789 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:11.047 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.047 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:11.047 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.047 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.304 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.305 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.305 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.305 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.562 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.562 19:52:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:11.820 19:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:23:11.820 19:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:12.132 19:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:12.413 19:52:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:13.346 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:13.346 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:13.346 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.346 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.603 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.603 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:13.603 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.603 19:52:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.860 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.860 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.860 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.860 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:14.118 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.118 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:14.118 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.118 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.375 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.375 19:52:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.375 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.375 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.633 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.633 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:14.633 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.633 19:52:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.890 19:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.890 19:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:14.890 19:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:15.147 19:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:15.405 19:52:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:16.335 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:16.335 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:16.336 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.336 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.593 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.593 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:16.593 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.593 19:52:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.851 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.851 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.851 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.851 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:17.108 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.108 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:17.109 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.109 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.382 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.382 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.382 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.382 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.639 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.639 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:17.639 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.639 19:52:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.897 19:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.897 19:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:17.897 19:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:18.154 19:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:18.411 19:52:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
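Every check_status round above reduces to the same probe: ask bdevperf for its I/O paths over its RPC socket and compare one attribute per listener port. A plausible reconstruction of that helper, assuming the jq filter printed in the trace; the name mirrors port_status() in host/multipath_status.sh but the body is a sketch, not the verbatim script (rpc.py abbreviated as above):

# Reconstruction for illustration. attr is one of: current | connected | accessible.
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}
# e.g. after set_ANA_state non_optimized inaccessible, path 4421 must drop out:
port_status 4421 accessible false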
00:23:19.343 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:19.343 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:19.343 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.343 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.601 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.601 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:19.601 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.601 19:52:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.858 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.858 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.858 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.858 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.116 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.116 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.116 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.116 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.373 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.373 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.373 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.373 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.630 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.630 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.630 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.630 19:52:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.887 19:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.887 19:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:20.887 19:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:21.144 19:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:21.401 19:52:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:22.332 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:22.332 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:22.332 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.332 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.590 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.590 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:22.590 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.590 19:52:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.847 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.847 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.847 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.847 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:23.105 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:23:23.105 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.105 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.105 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.362 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.362 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.362 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.362 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.619 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.619 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:23.619 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.619 19:52:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1251831 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' -z 1251831 ']' 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # kill -0 1251831 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # uname 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1251831 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # process_name=reactor_2 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # '[' reactor_2 = sudo ']' 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1251831' 00:23:23.876 killing process with pid 1251831 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # kill 1251831 00:23:23.876 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@975 -- # wait 1251831 00:23:24.136 Connection closed with partial response: 00:23:24.136 00:23:24.136 00:23:24.136 
19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1251831 00:23:24.136 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:24.136 [2024-07-24 19:52:07.103641] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:23:24.136 [2024-07-24 19:52:07.103720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251831 ] 00:23:24.136 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.136 [2024-07-24 19:52:07.161767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.136 [2024-07-24 19:52:07.273829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.136 Running I/O for 90 seconds... 00:23:24.136 [2024-07-24 19:52:23.015094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.015223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.015279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:44648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.015320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.015358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:44664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.015396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.015433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:44680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 
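killprocess (from test/common/autotest_common.sh, traced just above at @951-@975 and followed by a second wait at @139) is the teardown guard applied here to the bdevperf pid 1251831, whose command name resolves to reactor_2 (its reactor thread on core 2, matching core mask 0x4). The "Connection closed with partial response" lines are bdevperf being shut down mid-I/O. A condensed sketch of what the trace shows, with the sudo special case elided:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                    # @951: require a pid
        kill -0 "$pid"                               # @955: fail if already gone
        if [ "$(uname)" = Linux ]; then              # @956
            process_name=$(ps --no-headers -o comm= "$pid")   # @957
        fi
        # @961: the real helper special-cases process_name = sudo (it must
        # signal sudo's child instead); that branch is elided in this sketch.
        echo "killing process with pid $pid"         # @969
        kill "$pid"                                  # @970: default SIGTERM
        wait "$pid" || true                          # @975: reap our child
    }

Everything from the cat at @141 down to the "Received shutdown signal" line is the bdevperf host-side log (test/nvmf/host/try.txt) dumped verbatim.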
00:23:24.136 [2024-07-24 19:52:23.015472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.015489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.016577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.016601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.016630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.016648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.016671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.016699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.016723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.016739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:24.136 [2024-07-24 19:52:23.016762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.136 [2024-07-24 19:52:23.016778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.016800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.137 [2024-07-24 19:52:23.016816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.016838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.016854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.016876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.016892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.016914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.016930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.016968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.016984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.137 [2024-07-24 19:52:23.017585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
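The dump continues below in the same shape for hundreds of records: each pair is one I/O command (READ or WRITE, with its LBA) printed by nvme_io_qpair_print_command, followed by its completion printed by spdk_nvme_print_completion, because the completion carried the path-related status ASYMMETRIC ACCESS INACCESSIBLE (sct 0x3 / sc 0x2, rendered as 03/02). These are expected while the script flips ANA states underneath active I/O, and the multipath layer can retry the affected I/Os on the path that remains accessible. A dump like this is easier to audit with a throwaway summary (hypothetical one-liners, not part of the test):

    # Count completions per status string in a try.txt like this one:
    grep -oE '\*NOTICE\*: [A-Z][A-Z ]+\([0-9a-f]{2}/[0-9a-f]{2}\)' try.txt | sort | uniq -c

    # Total I/O commands that were printed:
    grep -c 'nvme_io_qpair_print_command' try.txt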
00:23:24.137 [2024-07-24 19:52:23.017840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.017957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.017980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.137 [2024-07-24 19:52:23.018387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:24.137 [2024-07-24 19:52:23.018410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.018979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.018994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 
dnr:0 00:23:24.138 [2024-07-24 19:52:23.019055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.019983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.019999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.020026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.020042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.020068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.020084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:24.138 [2024-07-24 19:52:23.020111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.138 [2024-07-24 19:52:23.020126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44384 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:44416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:116 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.020971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.020987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.021029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.021074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.021118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-07-24 19:52:23.021161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:24.139 [2024-07-24 19:52:23.021204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:44504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.021268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.021316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.139 [2024-07-24 19:52:23.021359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:24.139 [2024-07-24 19:52:23.021387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.139 [2024-07-24 19:52:23.021403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:23:24.139 [... 19:52:23: twelve further READ command/completion pairs (lba 44536-44624, sqhd 0011-001c), every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:23:24.140 [... 19:52:38: a second burst of READ/WRITE command/completion pairs (lba 24024-24400, sqhd 006b wrapping to 0002), again all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:23:24.140 Received shutdown signal, test time was about 32.323051 seconds
00:23:24.140
00:23:24.140 Latency(us)
00:23:24.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:24.140 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:24.140 Verification LBA range: start 0x0 length 0x4000
00:23:24.140 Nvme0n1 : 32.32 7850.50 30.67 0.00 0.00 16277.64 359.54 4026531.84
00:23:24.140 ===================================================================================================================
00:23:24.140 Total : 7850.50 30.67 0.00 0.00 16277.64 359.54 4026531.84
00:23:24.140 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:23:24.398 19:52:41
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:24.398 rmmod nvme_tcp 00:23:24.398 rmmod nvme_fabrics 00:23:24.398 rmmod nvme_keyring 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # '[' -n 1251548 ']' 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@494 -- # killprocess 1251548 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' -z 1251548 ']' 00:23:24.398 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # kill -0 1251548 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # uname 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1251548 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1251548' 00:23:24.656 killing process with pid 1251548 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # kill 1251548 00:23:24.656 19:52:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@975 -- # wait 1251548 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.914 19:52:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.914 19:52:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:23:26.811 00:23:26.811 real 0m40.994s 00:23:26.811 user 2m4.003s 00:23:26.811 sys 0m10.283s 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:26.811 ************************************ 00:23:26.811 END TEST nvmf_host_multipath_status 00:23:26.811 ************************************ 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.811 ************************************ 00:23:26.811 START TEST nvmf_discovery_remove_ifc 00:23:26.811 ************************************ 00:23:26.811 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:27.098 * Looking for test storage... 
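
The teardown traced just above (trap reset, module unload, killprocess of the target pid, namespace removal) is the standard nvmftestfini sequence, and the START/END banners with the real/user/sys times come from the run_test wrapper that launches each suite. A minimal sketch of that wrapper, reconstructed only from what it prints in this log -- the real helper lives in common/autotest_common.sh and also manages xtrace and failure reporting, so treat this as an illustration, not the exact implementation:

    # Hedged reconstruction of the run_test pattern visible in this log.
    run_test() {
        local name="$1"; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"            # e.g. .../discovery_remove_ifc.sh --transport=tcp
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
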
00:23:27.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triple repeated six more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:27.098 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same PATH re-prepended ...] 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same PATH re-prepended ...] 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [... the exported PATH ...] 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:23:27.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # xtrace_disable 00:23:27.099 19:52:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # pci_devs=() 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -a pci_devs 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # pci_net_devs=() 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # pci_drivers=() 00:23:28.996 19:52:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -A pci_drivers 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # net_devs=() 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # local -ga net_devs 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # e810=() 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # local -ga e810 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # x722=() 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # local -ga x722 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # mlx=() 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # local -ga mlx 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:28.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == 
unknown ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:28.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:28.996 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # [[ up == up ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:28.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # [[ up == up ]] 00:23:28.997 19:52:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:28.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # is_hw=yes 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:23:28.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:28.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:23:28.997 00:23:28.997 --- 10.0.0.2 ping statistics --- 00:23:28.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.997 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:28.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:28.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:23:28.997 00:23:28.997 --- 10.0.0.1 ping statistics --- 00:23:28.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:28.997 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # return 0 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@725 -- # xtrace_disable 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@485 -- # nvmfpid=1257948 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@486 -- # waitforlisten 1257948 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # '[' -z 1257948 ']' 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local max_retries=100 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:28.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@841 -- # xtrace_disable 00:23:28.997 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:28.997 [2024-07-24 19:52:46.361283] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:23:28.997 [2024-07-24 19:52:46.361381] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.255 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.255 [2024-07-24 19:52:46.429545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.255 [2024-07-24 19:52:46.537124] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.255 [2024-07-24 19:52:46.537183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:29.255 [2024-07-24 19:52:46.537210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.255 [2024-07-24 19:52:46.537222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.255 [2024-07-24 19:52:46.537231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.255 [2024-07-24 19:52:46.537285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@865 -- # return 0 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@731 -- # xtrace_disable 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.513 [2024-07-24 19:52:46.694879] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.513 [2024-07-24 19:52:46.703099] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:29.513 null0 00:23:29.513 [2024-07-24 19:52:46.735021] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1258061 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1258061 /tmp/host.sock 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # '[' -z 1258061 ']' 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local rpc_addr=/tmp/host.sock 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local max_retries=100 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:29.513 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@841 -- # xtrace_disable 00:23:29.513 19:52:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.513 [2024-07-24 19:52:46.799824] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:23:29.513 [2024-07-24 19:52:46.799905] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1258061 ] 00:23:29.513 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.513 [2024-07-24 19:52:46.861035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.770 [2024-07-24 19:52:46.977636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.770 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:23:29.770 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@865 -- # return 0 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:29.771 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.028 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:30.028 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:30.028 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:30.028 19:52:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:30.960 [2024-07-24 19:52:48.224383] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:30.960 [2024-07-24 19:52:48.224418] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:30.960 [2024-07-24 19:52:48.224440] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:30.960 [2024-07-24 19:52:48.310754] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:31.217 [2024-07-24 19:52:48.536083] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:31.217 [2024-07-24 19:52:48.536146] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:31.217 [2024-07-24 19:52:48.536195] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:31.217 [2024-07-24 19:52:48.536220] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:31.217 [2024-07-24 19:52:48.536264] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.217 [2024-07-24 19:52:48.541324] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9dc900 was disconnected and freed. delete nvme_qpair. 
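
The wait_for_bdev/get_bdev_list polling traced here reduces to listing bdev names over the host RPC socket and comparing against an expected string. A sketch consistent with the trace -- in the real script rpc_cmd is a wrapper helper, so the explicit rpc.py path below is assumed here only to make the snippet self-contained:

    # Poll helpers as they behave in this trace (rpc.py path assumed).
    RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock'

    get_bdev_list() {
        # Print attached bdev names as one sorted, space-separated string;
        # prints nothing when no bdev is attached.
        $RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches: 'nvme0n1' right after
        # the discovery attach, '' after the interface is removed at @75-@76.
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
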
00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:31.217 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.474 19:52:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:32.407 19:52:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:33.340 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:33.598 19:52:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:33.598 19:52:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.531 19:52:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.462 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:35.719 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:35.719 19:52:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:36.650 19:52:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.650 [2024-07-24 19:52:53.976953] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:36.650 [2024-07-24 19:52:53.977026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.650 [2024-07-24 19:52:53.977050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.650 [2024-07-24 19:52:53.977070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.650 [2024-07-24 19:52:53.977086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.650 [2024-07-24 19:52:53.977102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.650 [2024-07-24 19:52:53.977116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.650 [2024-07-24 19:52:53.977132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.650 [2024-07-24 19:52:53.977147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.650 [2024-07-24 19:52:53.977163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.650 [2024-07-24 19:52:53.977178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.650 [2024-07-24 19:52:53.977193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a3390 is same with the state(6) to be set 00:23:36.650 [2024-07-24 19:52:53.986970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a3390 (9): Bad file descriptor 00:23:36.650 [2024-07-24 19:52:53.997018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:37.580 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.580 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:23:37.580 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.580 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:37.580 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.580 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:37.581 19:52:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.837 [2024-07-24 19:52:55.006274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:37.837 [2024-07-24 19:52:55.006325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9a3390 with addr=10.0.0.2, port=4420 00:23:37.837 [2024-07-24 19:52:55.006347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a3390 is same with the state(6) to be set 00:23:37.837 [2024-07-24 19:52:55.006381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a3390 (9): Bad file descriptor 00:23:37.837 [2024-07-24 19:52:55.006797] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:37.837 [2024-07-24 19:52:55.006842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:37.837 [2024-07-24 19:52:55.006862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:37.837 [2024-07-24 19:52:55.006879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:37.837 [2024-07-24 19:52:55.006903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:37.837 [2024-07-24 19:52:55.006922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:37.837 19:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:37.837 19:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:37.837 19:52:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:38.768 [2024-07-24 19:52:56.009415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:38.768 [2024-07-24 19:52:56.009443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:38.768 [2024-07-24 19:52:56.009457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:38.768 [2024-07-24 19:52:56.009469] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:38.768 [2024-07-24 19:52:56.009488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
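The repeating get_bdev_list records above are the test's wait loop: it polls the SPDK host application over its RPC socket once per second until the bdev list reflects the interface change. A minimal sketch of those helpers, reconstructed from the traced pipeline (rpc_cmd, the /tmp/host.sock socket path, and the sort | xargs normalization all appear verbatim in the xtrace; the actual definitions live in test/nvmf/host/discovery_remove_ifc.sh and may differ in detail):

    # Sketch reconstructed from the xtrace records above, not the script source.
    get_bdev_list() {
        # xargs flattens the sorted names onto one line for a plain string compare
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected=$1   # '' while waiting for removal, nvme1n1 after re-attach
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The connection-timeout and reset errors interleaved with the loop come from the SPDK host process itself: with the target interface gone, every reconnect attempt fails with errno 110 until the test restores the address.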
00:23:38.768 [2024-07-24 19:52:56.009534] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:38.768 [2024-07-24 19:52:56.009573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.768 [2024-07-24 19:52:56.009596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.768 [2024-07-24 19:52:56.009615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.768 [2024-07-24 19:52:56.009630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.768 [2024-07-24 19:52:56.009646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.768 [2024-07-24 19:52:56.009660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.768 [2024-07-24 19:52:56.009676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.768 [2024-07-24 19:52:56.009690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.768 [2024-07-24 19:52:56.009716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:38.768 [2024-07-24 19:52:56.009731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:38.768 [2024-07-24 19:52:56.009745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:38.768 [2024-07-24 19:52:56.010093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9a27f0 (9): Bad file descriptor 00:23:38.768 [2024-07-24 19:52:56.011113] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:38.768 [2024-07-24 19:52:56.011138] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:38.768 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:39.025 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:39.025 19:52:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:39.956 19:52:57 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:39.956 19:52:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:40.887 [2024-07-24 19:52:58.069027] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:40.887 [2024-07-24 19:52:58.069059] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:40.887 [2024-07-24 19:52:58.069085] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:40.887 [2024-07-24 19:52:58.196513] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:40.887 19:52:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:41.144 [2024-07-24 19:52:58.423079] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:41.144 [2024-07-24 19:52:58.423129] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:41.144 [2024-07-24 19:52:58.423168] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:41.144 [2024-07-24 19:52:58.423193] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:41.144 [2024-07-24 19:52:58.423208] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:41.144 [2024-07-24 19:52:58.427601] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x9e5e10 was disconnected and freed. 
delete nvme_qpair. 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1258061 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' -z 1258061 ']' 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # kill -0 1258061 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # uname 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1258061 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1258061' 00:23:42.075 killing process with pid 1258061 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # kill 1258061 00:23:42.075 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@975 -- # wait 1258061 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.333 rmmod nvme_tcp 00:23:42.333 rmmod nvme_fabrics 00:23:42.333 rmmod nvme_keyring 
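The killprocess trace above is the common teardown helper from autotest_common.sh: it checks that the pid is still alive, inspects the process name (reactor_0 here, the SPDK app's main reactor thread) so it never signals a bare sudo wrapper, then kills the process and waits on it so the exit status is collected. A sketch matching the traced steps (the sudo special-casing in the real helper is more involved than shown):

    # Sketch of killprocess as suggested by the autotest_common.sh xtrace above.
    killprocess() {
        local pid=$1
        [[ -n "$pid" ]] || return 1
        kill -0 "$pid" || return 0                       # nothing left to kill
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 / reactor_1 here
        [[ "$process_name" != sudo ]] || return 1        # real helper handles sudo specially
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }

Pid 1258061 (the host application on /tmp/host.sock, reactor_0) is killed first; nvmftestfini then unloads the nvme-tcp, nvme-fabrics, and nvme-keyring modules and tears down the nvmf target process 1257948 (reactor_1) the same way.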
00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # '[' -n 1257948 ']' 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@494 -- # killprocess 1257948 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' -z 1257948 ']' 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # kill -0 1257948 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # uname 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1257948 00:23:42.333 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:23:42.334 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:23:42.334 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1257948' 00:23:42.334 killing process with pid 1257948 00:23:42.334 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # kill 1257948 00:23:42.334 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@975 -- # wait 1257948 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.900 19:52:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:23:44.840 00:23:44.840 real 0m17.851s 00:23:44.840 user 0m26.033s 00:23:44.840 sys 0m3.039s 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.840 ************************************ 00:23:44.840 END TEST nvmf_discovery_remove_ifc 00:23:44.840 ************************************ 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.840 ************************************ 00:23:44.840 START TEST nvmf_identify_kernel_target 00:23:44.840 ************************************ 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:44.840 * Looking for test storage... 00:23:44.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.840 19:53:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.840 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # xtrace_disable 00:23:44.841 19:53:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # pci_devs=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -a pci_devs 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # pci_net_devs=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # pci_drivers=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -A pci_drivers 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # net_devs=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # local -ga net_devs 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- # 
e810=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- # local -ga e810 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # x722=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # local -ga x722 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # mlx=() 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # local -ga mlx 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.737 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:46.996 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ 0x159b 
== \0\x\1\0\1\9 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:46.996 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:46.996 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:46.996 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # is_hw=yes 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:46.996 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:23:46.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:46.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:23:46.997 00:23:46.997 --- 10.0.0.2 ping statistics --- 00:23:46.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.997 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:46.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:46.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:23:46.997 00:23:46.997 --- 10.0.0.1 ping statistics --- 00:23:46.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:46.997 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # return 0 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # local ip 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@746 -- # ip_candidates=() 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@746 -- # local -A ip_candidates 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@643 -- # local block nvme 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ ! -e /sys/module/nvmet ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@646 -- # modprobe nvmet 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:46.997 19:53:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:47.935 Waiting for block devices as requested 00:23:48.193 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:48.193 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:48.193 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:48.451 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:48.451 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:48.451 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:48.709 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:48.710 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:48.710 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:48.710 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:48.967 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:48.967 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:48.967 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:48.967 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:49.225 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:49.225 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:49.225 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # local device=nvme0n1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1666 -- # [[ none != none ]] 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
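configure_kernel_target stands up an in-kernel NVMe-oF target through the nvmet configfs tree; the local variables above name the nodes it will create, and the mkdir/echo records further down create and populate them. Bash xtrace does not print redirections, so the attribute file names are not visible in the log; the ones below are the standard nvmet configfs attributes, filled in here as an assumption. A condensed sketch:

    # Kernel NVMe-oF TCP target via configfs (values taken from the trace;
    # redirect targets are assumed standard nvmet attribute names).
    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # Model Number in the identify dump below
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # free local namespace found above
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port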
00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:49.483 No valid GPT data, bailing 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # [[ -b /dev/nvme0n1 ]] 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo /dev/nvme0n1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # echo tcp 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # echo 4420 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # echo ipv4 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:49.483 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:49.483 00:23:49.483 Discovery Log Number of Records 2, Generation counter 2 00:23:49.483 =====Discovery Log Entry 0====== 00:23:49.483 trtype: tcp 00:23:49.483 adrfam: ipv4 00:23:49.483 subtype: current discovery subsystem 00:23:49.483 treq: not specified, sq flow control disable supported 00:23:49.483 portid: 1 00:23:49.483 trsvcid: 4420 00:23:49.483 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:49.483 traddr: 10.0.0.1 00:23:49.483 eflags: none 00:23:49.483 sectype: none 00:23:49.483 =====Discovery Log Entry 1====== 00:23:49.483 trtype: tcp 00:23:49.483 adrfam: ipv4 00:23:49.483 subtype: nvme subsystem 00:23:49.483 treq: not specified, sq flow control disable supported 00:23:49.483 portid: 1 00:23:49.483 trsvcid: 4420 00:23:49.483 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:49.483 traddr: 10.0.0.1 00:23:49.483 eflags: none 00:23:49.483 sectype: none 00:23:49.483 19:53:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:49.483 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:49.483 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.743 ===================================================== 00:23:49.743 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:49.743 ===================================================== 00:23:49.743 Controller Capabilities/Features 00:23:49.743 ================================ 00:23:49.743 Vendor ID: 0000 00:23:49.743 Subsystem Vendor ID: 0000 00:23:49.743 Serial Number: 3444090bc0696307833f 00:23:49.743 Model Number: Linux 00:23:49.743 Firmware Version: 6.7.0-68 00:23:49.743 Recommended Arb Burst: 0 00:23:49.743 IEEE OUI Identifier: 00 00 00 00:23:49.743 Multi-path I/O 00:23:49.743 May have multiple subsystem ports: No 00:23:49.743 May have multiple controllers: No 00:23:49.743 Associated with SR-IOV VF: No 00:23:49.743 Max Data Transfer Size: Unlimited 00:23:49.743 Max Number of Namespaces: 0 00:23:49.743 Max Number of I/O Queues: 1024 00:23:49.743 NVMe Specification Version (VS): 1.3 00:23:49.743 NVMe Specification Version (Identify): 1.3 00:23:49.743 Maximum Queue Entries: 1024 00:23:49.743 Contiguous Queues Required: No 00:23:49.743 Arbitration Mechanisms Supported 00:23:49.743 Weighted Round Robin: Not Supported 00:23:49.743 Vendor Specific: Not Supported 00:23:49.743 Reset Timeout: 7500 ms 00:23:49.743 Doorbell Stride: 4 bytes 00:23:49.743 NVM Subsystem Reset: Not Supported 00:23:49.743 Command Sets Supported 00:23:49.743 NVM Command Set: Supported 00:23:49.743 Boot Partition: Not Supported 00:23:49.743 Memory Page Size Minimum: 4096 bytes 00:23:49.743 Memory Page Size Maximum: 4096 bytes 00:23:49.743 Persistent Memory Region: Not Supported 00:23:49.743 Optional Asynchronous Events Supported 00:23:49.743 Namespace Attribute Notices: Not Supported 00:23:49.743 Firmware Activation Notices: Not Supported 00:23:49.743 ANA Change Notices: Not Supported 00:23:49.743 PLE Aggregate Log Change Notices: Not Supported 00:23:49.743 LBA Status Info Alert Notices: Not Supported 00:23:49.743 EGE Aggregate Log Change Notices: Not Supported 00:23:49.743 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.743 Zone Descriptor Change Notices: Not Supported 00:23:49.743 Discovery Log Change Notices: Supported 00:23:49.743 Controller Attributes 00:23:49.743 128-bit Host Identifier: Not Supported 00:23:49.743 Non-Operational Permissive Mode: Not Supported 00:23:49.743 NVM Sets: Not Supported 00:23:49.743 Read Recovery Levels: Not Supported 00:23:49.743 Endurance Groups: Not Supported 00:23:49.743 Predictable Latency Mode: Not Supported 00:23:49.743 Traffic Based Keep ALive: Not Supported 00:23:49.743 Namespace Granularity: Not Supported 00:23:49.743 SQ Associations: Not Supported 00:23:49.743 UUID List: Not Supported 00:23:49.743 Multi-Domain Subsystem: Not Supported 00:23:49.743 Fixed Capacity Management: Not Supported 00:23:49.743 Variable Capacity Management: Not Supported 00:23:49.743 Delete Endurance Group: Not Supported 00:23:49.743 Delete NVM Set: Not Supported 00:23:49.743 Extended LBA Formats Supported: Not Supported 00:23:49.743 Flexible Data Placement Supported: Not Supported 00:23:49.743 00:23:49.743 Controller Memory Buffer Support 00:23:49.743 ================================ 00:23:49.743 Supported: No 
00:23:49.743 00:23:49.743 Persistent Memory Region Support 00:23:49.743 ================================ 00:23:49.743 Supported: No 00:23:49.743 00:23:49.743 Admin Command Set Attributes 00:23:49.743 ============================ 00:23:49.743 Security Send/Receive: Not Supported 00:23:49.743 Format NVM: Not Supported 00:23:49.743 Firmware Activate/Download: Not Supported 00:23:49.743 Namespace Management: Not Supported 00:23:49.743 Device Self-Test: Not Supported 00:23:49.743 Directives: Not Supported 00:23:49.743 NVMe-MI: Not Supported 00:23:49.743 Virtualization Management: Not Supported 00:23:49.743 Doorbell Buffer Config: Not Supported 00:23:49.743 Get LBA Status Capability: Not Supported 00:23:49.743 Command & Feature Lockdown Capability: Not Supported 00:23:49.743 Abort Command Limit: 1 00:23:49.743 Async Event Request Limit: 1 00:23:49.743 Number of Firmware Slots: N/A 00:23:49.743 Firmware Slot 1 Read-Only: N/A 00:23:49.743 Firmware Activation Without Reset: N/A 00:23:49.743 Multiple Update Detection Support: N/A 00:23:49.743 Firmware Update Granularity: No Information Provided 00:23:49.743 Per-Namespace SMART Log: No 00:23:49.743 Asymmetric Namespace Access Log Page: Not Supported 00:23:49.743 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:49.743 Command Effects Log Page: Not Supported 00:23:49.743 Get Log Page Extended Data: Supported 00:23:49.743 Telemetry Log Pages: Not Supported 00:23:49.744 Persistent Event Log Pages: Not Supported 00:23:49.744 Supported Log Pages Log Page: May Support 00:23:49.744 Commands Supported & Effects Log Page: Not Supported 00:23:49.744 Feature Identifiers & Effects Log Page:May Support 00:23:49.744 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.744 Data Area 4 for Telemetry Log: Not Supported 00:23:49.744 Error Log Page Entries Supported: 1 00:23:49.744 Keep Alive: Not Supported 00:23:49.744 00:23:49.744 NVM Command Set Attributes 00:23:49.744 ========================== 00:23:49.744 Submission Queue Entry Size 00:23:49.744 Max: 1 00:23:49.744 Min: 1 00:23:49.744 Completion Queue Entry Size 00:23:49.744 Max: 1 00:23:49.744 Min: 1 00:23:49.744 Number of Namespaces: 0 00:23:49.744 Compare Command: Not Supported 00:23:49.744 Write Uncorrectable Command: Not Supported 00:23:49.744 Dataset Management Command: Not Supported 00:23:49.744 Write Zeroes Command: Not Supported 00:23:49.744 Set Features Save Field: Not Supported 00:23:49.744 Reservations: Not Supported 00:23:49.744 Timestamp: Not Supported 00:23:49.744 Copy: Not Supported 00:23:49.744 Volatile Write Cache: Not Present 00:23:49.744 Atomic Write Unit (Normal): 1 00:23:49.744 Atomic Write Unit (PFail): 1 00:23:49.744 Atomic Compare & Write Unit: 1 00:23:49.744 Fused Compare & Write: Not Supported 00:23:49.744 Scatter-Gather List 00:23:49.744 SGL Command Set: Supported 00:23:49.744 SGL Keyed: Not Supported 00:23:49.744 SGL Bit Bucket Descriptor: Not Supported 00:23:49.744 SGL Metadata Pointer: Not Supported 00:23:49.744 Oversized SGL: Not Supported 00:23:49.744 SGL Metadata Address: Not Supported 00:23:49.744 SGL Offset: Supported 00:23:49.744 Transport SGL Data Block: Not Supported 00:23:49.744 Replay Protected Memory Block: Not Supported 00:23:49.744 00:23:49.744 Firmware Slot Information 00:23:49.744 ========================= 00:23:49.744 Active slot: 0 00:23:49.744 00:23:49.744 00:23:49.744 Error Log 00:23:49.744 ========= 00:23:49.744 00:23:49.744 Active Namespaces 00:23:49.744 ================= 00:23:49.744 Discovery Log Page 00:23:49.744 ================== 00:23:49.744 
Generation Counter: 2 00:23:49.744 Number of Records: 2 00:23:49.744 Record Format: 0 00:23:49.744 00:23:49.744 Discovery Log Entry 0 00:23:49.744 ---------------------- 00:23:49.744 Transport Type: 3 (TCP) 00:23:49.744 Address Family: 1 (IPv4) 00:23:49.744 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:49.744 Entry Flags: 00:23:49.744 Duplicate Returned Information: 0 00:23:49.744 Explicit Persistent Connection Support for Discovery: 0 00:23:49.744 Transport Requirements: 00:23:49.744 Secure Channel: Not Specified 00:23:49.744 Port ID: 1 (0x0001) 00:23:49.744 Controller ID: 65535 (0xffff) 00:23:49.744 Admin Max SQ Size: 32 00:23:49.744 Transport Service Identifier: 4420 00:23:49.744 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:49.744 Transport Address: 10.0.0.1 00:23:49.744 Discovery Log Entry 1 00:23:49.744 ---------------------- 00:23:49.744 Transport Type: 3 (TCP) 00:23:49.744 Address Family: 1 (IPv4) 00:23:49.744 Subsystem Type: 2 (NVM Subsystem) 00:23:49.744 Entry Flags: 00:23:49.744 Duplicate Returned Information: 0 00:23:49.744 Explicit Persistent Connection Support for Discovery: 0 00:23:49.744 Transport Requirements: 00:23:49.744 Secure Channel: Not Specified 00:23:49.744 Port ID: 1 (0x0001) 00:23:49.744 Controller ID: 65535 (0xffff) 00:23:49.744 Admin Max SQ Size: 32 00:23:49.744 Transport Service Identifier: 4420 00:23:49.744 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:49.744 Transport Address: 10.0.0.1 00:23:49.744 19:53:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:49.744 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.744 get_feature(0x01) failed 00:23:49.744 get_feature(0x02) failed 00:23:49.744 get_feature(0x04) failed 00:23:49.744 ===================================================== 00:23:49.744 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:49.744 ===================================================== 00:23:49.744 Controller Capabilities/Features 00:23:49.744 ================================ 00:23:49.744 Vendor ID: 0000 00:23:49.744 Subsystem Vendor ID: 0000 00:23:49.744 Serial Number: e7a86a56237d25d79d23 00:23:49.744 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:49.744 Firmware Version: 6.7.0-68 00:23:49.744 Recommended Arb Burst: 6 00:23:49.744 IEEE OUI Identifier: 00 00 00 00:23:49.744 Multi-path I/O 00:23:49.744 May have multiple subsystem ports: Yes 00:23:49.744 May have multiple controllers: Yes 00:23:49.744 Associated with SR-IOV VF: No 00:23:49.744 Max Data Transfer Size: Unlimited 00:23:49.744 Max Number of Namespaces: 1024 00:23:49.744 Max Number of I/O Queues: 128 00:23:49.744 NVMe Specification Version (VS): 1.3 00:23:49.744 NVMe Specification Version (Identify): 1.3 00:23:49.744 Maximum Queue Entries: 1024 00:23:49.744 Contiguous Queues Required: No 00:23:49.744 Arbitration Mechanisms Supported 00:23:49.744 Weighted Round Robin: Not Supported 00:23:49.744 Vendor Specific: Not Supported 00:23:49.744 Reset Timeout: 7500 ms 00:23:49.744 Doorbell Stride: 4 bytes 00:23:49.744 NVM Subsystem Reset: Not Supported 00:23:49.744 Command Sets Supported 00:23:49.744 NVM Command Set: Supported 00:23:49.744 Boot Partition: Not Supported 00:23:49.744 Memory Page Size Minimum: 4096 bytes 00:23:49.744 Memory Page Size Maximum: 4096 bytes 00:23:49.744 
Persistent Memory Region: Not Supported 00:23:49.744 Optional Asynchronous Events Supported 00:23:49.744 Namespace Attribute Notices: Supported 00:23:49.744 Firmware Activation Notices: Not Supported 00:23:49.744 ANA Change Notices: Supported 00:23:49.744 PLE Aggregate Log Change Notices: Not Supported 00:23:49.744 LBA Status Info Alert Notices: Not Supported 00:23:49.744 EGE Aggregate Log Change Notices: Not Supported 00:23:49.744 Normal NVM Subsystem Shutdown event: Not Supported 00:23:49.744 Zone Descriptor Change Notices: Not Supported 00:23:49.745 Discovery Log Change Notices: Not Supported 00:23:49.745 Controller Attributes 00:23:49.745 128-bit Host Identifier: Supported 00:23:49.745 Non-Operational Permissive Mode: Not Supported 00:23:49.745 NVM Sets: Not Supported 00:23:49.745 Read Recovery Levels: Not Supported 00:23:49.745 Endurance Groups: Not Supported 00:23:49.745 Predictable Latency Mode: Not Supported 00:23:49.745 Traffic Based Keep ALive: Supported 00:23:49.745 Namespace Granularity: Not Supported 00:23:49.745 SQ Associations: Not Supported 00:23:49.745 UUID List: Not Supported 00:23:49.745 Multi-Domain Subsystem: Not Supported 00:23:49.745 Fixed Capacity Management: Not Supported 00:23:49.745 Variable Capacity Management: Not Supported 00:23:49.745 Delete Endurance Group: Not Supported 00:23:49.745 Delete NVM Set: Not Supported 00:23:49.745 Extended LBA Formats Supported: Not Supported 00:23:49.745 Flexible Data Placement Supported: Not Supported 00:23:49.745 00:23:49.745 Controller Memory Buffer Support 00:23:49.745 ================================ 00:23:49.745 Supported: No 00:23:49.745 00:23:49.745 Persistent Memory Region Support 00:23:49.745 ================================ 00:23:49.745 Supported: No 00:23:49.745 00:23:49.745 Admin Command Set Attributes 00:23:49.745 ============================ 00:23:49.745 Security Send/Receive: Not Supported 00:23:49.745 Format NVM: Not Supported 00:23:49.745 Firmware Activate/Download: Not Supported 00:23:49.745 Namespace Management: Not Supported 00:23:49.745 Device Self-Test: Not Supported 00:23:49.745 Directives: Not Supported 00:23:49.745 NVMe-MI: Not Supported 00:23:49.745 Virtualization Management: Not Supported 00:23:49.745 Doorbell Buffer Config: Not Supported 00:23:49.745 Get LBA Status Capability: Not Supported 00:23:49.745 Command & Feature Lockdown Capability: Not Supported 00:23:49.745 Abort Command Limit: 4 00:23:49.745 Async Event Request Limit: 4 00:23:49.745 Number of Firmware Slots: N/A 00:23:49.745 Firmware Slot 1 Read-Only: N/A 00:23:49.745 Firmware Activation Without Reset: N/A 00:23:49.745 Multiple Update Detection Support: N/A 00:23:49.745 Firmware Update Granularity: No Information Provided 00:23:49.745 Per-Namespace SMART Log: Yes 00:23:49.745 Asymmetric Namespace Access Log Page: Supported 00:23:49.745 ANA Transition Time : 10 sec 00:23:49.745 00:23:49.745 Asymmetric Namespace Access Capabilities 00:23:49.745 ANA Optimized State : Supported 00:23:49.745 ANA Non-Optimized State : Supported 00:23:49.745 ANA Inaccessible State : Supported 00:23:49.745 ANA Persistent Loss State : Supported 00:23:49.745 ANA Change State : Supported 00:23:49.745 ANAGRPID is not changed : No 00:23:49.745 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:49.745 00:23:49.745 ANA Group Identifier Maximum : 128 00:23:49.745 Number of ANA Group Identifiers : 128 00:23:49.745 Max Number of Allowed Namespaces : 1024 00:23:49.745 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:49.745 Command Effects Log Page: Supported 
00:23:49.745 Get Log Page Extended Data: Supported 00:23:49.745 Telemetry Log Pages: Not Supported 00:23:49.745 Persistent Event Log Pages: Not Supported 00:23:49.745 Supported Log Pages Log Page: May Support 00:23:49.745 Commands Supported & Effects Log Page: Not Supported 00:23:49.745 Feature Identifiers & Effects Log Page:May Support 00:23:49.745 NVMe-MI Commands & Effects Log Page: May Support 00:23:49.745 Data Area 4 for Telemetry Log: Not Supported 00:23:49.745 Error Log Page Entries Supported: 128 00:23:49.745 Keep Alive: Supported 00:23:49.745 Keep Alive Granularity: 1000 ms 00:23:49.745 00:23:49.745 NVM Command Set Attributes 00:23:49.745 ========================== 00:23:49.745 Submission Queue Entry Size 00:23:49.745 Max: 64 00:23:49.745 Min: 64 00:23:49.745 Completion Queue Entry Size 00:23:49.745 Max: 16 00:23:49.745 Min: 16 00:23:49.745 Number of Namespaces: 1024 00:23:49.745 Compare Command: Not Supported 00:23:49.745 Write Uncorrectable Command: Not Supported 00:23:49.745 Dataset Management Command: Supported 00:23:49.745 Write Zeroes Command: Supported 00:23:49.745 Set Features Save Field: Not Supported 00:23:49.745 Reservations: Not Supported 00:23:49.745 Timestamp: Not Supported 00:23:49.745 Copy: Not Supported 00:23:49.745 Volatile Write Cache: Present 00:23:49.745 Atomic Write Unit (Normal): 1 00:23:49.745 Atomic Write Unit (PFail): 1 00:23:49.745 Atomic Compare & Write Unit: 1 00:23:49.745 Fused Compare & Write: Not Supported 00:23:49.745 Scatter-Gather List 00:23:49.745 SGL Command Set: Supported 00:23:49.745 SGL Keyed: Not Supported 00:23:49.745 SGL Bit Bucket Descriptor: Not Supported 00:23:49.745 SGL Metadata Pointer: Not Supported 00:23:49.745 Oversized SGL: Not Supported 00:23:49.745 SGL Metadata Address: Not Supported 00:23:49.745 SGL Offset: Supported 00:23:49.745 Transport SGL Data Block: Not Supported 00:23:49.745 Replay Protected Memory Block: Not Supported 00:23:49.745 00:23:49.745 Firmware Slot Information 00:23:49.745 ========================= 00:23:49.745 Active slot: 0 00:23:49.745 00:23:49.745 Asymmetric Namespace Access 00:23:49.745 =========================== 00:23:49.745 Change Count : 0 00:23:49.745 Number of ANA Group Descriptors : 1 00:23:49.745 ANA Group Descriptor : 0 00:23:49.745 ANA Group ID : 1 00:23:49.745 Number of NSID Values : 1 00:23:49.745 Change Count : 0 00:23:49.745 ANA State : 1 00:23:49.745 Namespace Identifier : 1 00:23:49.745 00:23:49.745 Commands Supported and Effects 00:23:49.745 ============================== 00:23:49.745 Admin Commands 00:23:49.745 -------------- 00:23:49.745 Get Log Page (02h): Supported 00:23:49.745 Identify (06h): Supported 00:23:49.745 Abort (08h): Supported 00:23:49.745 Set Features (09h): Supported 00:23:49.745 Get Features (0Ah): Supported 00:23:49.745 Asynchronous Event Request (0Ch): Supported 00:23:49.745 Keep Alive (18h): Supported 00:23:49.745 I/O Commands 00:23:49.745 ------------ 00:23:49.745 Flush (00h): Supported 00:23:49.745 Write (01h): Supported LBA-Change 00:23:49.745 Read (02h): Supported 00:23:49.745 Write Zeroes (08h): Supported LBA-Change 00:23:49.745 Dataset Management (09h): Supported 00:23:49.745 00:23:49.745 Error Log 00:23:49.745 ========= 00:23:49.745 Entry: 0 00:23:49.745 Error Count: 0x3 00:23:49.745 Submission Queue Id: 0x0 00:23:49.745 Command Id: 0x5 00:23:49.745 Phase Bit: 0 00:23:49.745 Status Code: 0x2 00:23:49.745 Status Code Type: 0x0 00:23:49.745 Do Not Retry: 1 00:23:49.745 Error Location: 0x28 00:23:49.745 LBA: 0x0 00:23:49.745 Namespace: 0x0 00:23:49.745 Vendor Log 
Page: 0x0 00:23:49.745 ----------- 00:23:49.745 Entry: 1 00:23:49.745 Error Count: 0x2 00:23:49.745 Submission Queue Id: 0x0 00:23:49.745 Command Id: 0x5 00:23:49.745 Phase Bit: 0 00:23:49.746 Status Code: 0x2 00:23:49.746 Status Code Type: 0x0 00:23:49.746 Do Not Retry: 1 00:23:49.746 Error Location: 0x28 00:23:49.746 LBA: 0x0 00:23:49.746 Namespace: 0x0 00:23:49.746 Vendor Log Page: 0x0 00:23:49.746 ----------- 00:23:49.746 Entry: 2 00:23:49.746 Error Count: 0x1 00:23:49.746 Submission Queue Id: 0x0 00:23:49.746 Command Id: 0x4 00:23:49.746 Phase Bit: 0 00:23:49.746 Status Code: 0x2 00:23:49.746 Status Code Type: 0x0 00:23:49.746 Do Not Retry: 1 00:23:49.746 Error Location: 0x28 00:23:49.746 LBA: 0x0 00:23:49.746 Namespace: 0x0 00:23:49.746 Vendor Log Page: 0x0 00:23:49.746 00:23:49.746 Number of Queues 00:23:49.746 ================ 00:23:49.746 Number of I/O Submission Queues: 128 00:23:49.746 Number of I/O Completion Queues: 128 00:23:49.746 00:23:49.746 ZNS Specific Controller Data 00:23:49.746 ============================ 00:23:49.746 Zone Append Size Limit: 0 00:23:49.746 00:23:49.746 00:23:49.746 Active Namespaces 00:23:49.746 ================= 00:23:49.746 get_feature(0x05) failed 00:23:49.746 Namespace ID:1 00:23:49.746 Command Set Identifier: NVM (00h) 00:23:49.746 Deallocate: Supported 00:23:49.746 Deallocated/Unwritten Error: Not Supported 00:23:49.746 Deallocated Read Value: Unknown 00:23:49.746 Deallocate in Write Zeroes: Not Supported 00:23:49.746 Deallocated Guard Field: 0xFFFF 00:23:49.746 Flush: Supported 00:23:49.746 Reservation: Not Supported 00:23:49.746 Namespace Sharing Capabilities: Multiple Controllers 00:23:49.746 Size (in LBAs): 1953525168 (931GiB) 00:23:49.746 Capacity (in LBAs): 1953525168 (931GiB) 00:23:49.746 Utilization (in LBAs): 1953525168 (931GiB) 00:23:49.746 UUID: aec6c38a-0560-4c31-b465-0184f6a86d39 00:23:49.746 Thin Provisioning: Not Supported 00:23:49.746 Per-NS Atomic Units: Yes 00:23:49.746 Atomic Boundary Size (Normal): 0 00:23:49.746 Atomic Boundary Size (PFail): 0 00:23:49.746 Atomic Boundary Offset: 0 00:23:49.746 NGUID/EUI64 Never Reused: No 00:23:49.746 ANA group ID: 1 00:23:49.746 Namespace Write Protected: No 00:23:49.746 Number of LBA Formats: 1 00:23:49.746 Current LBA Format: LBA Format #00 00:23:49.746 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:49.746 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:49.746 rmmod nvme_tcp 00:23:49.746 rmmod nvme_fabrics 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:49.746 19:53:07 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.746 19:53:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # echo 0 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:23:52.276 19:53:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:53.210 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:53.210 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:53.210 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:23:53.210 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:54.145 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:54.403 00:23:54.403 real 0m9.481s 00:23:54.403 user 0m2.013s 00:23:54.403 sys 0m3.414s 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # xtrace_disable 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.403 ************************************ 00:23:54.403 END TEST nvmf_identify_kernel_target 00:23:54.403 ************************************ 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.403 ************************************ 00:23:54.403 START TEST nvmf_auth_host 00:23:54.403 ************************************ 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:54.403 * Looking for test storage... 00:23:54.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 
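In the records just below, sourcing nvmf/common.sh trips a shell warning: build_nvmf_app_args evaluates '[' '' -eq 1 ']' and bash reports "line 33: [: : integer expression expected", because an unset variable expanded to the empty string where test expects an integer. The test returns non-zero, so the branch is simply skipped and the run continues. A defensive sketch of the pattern (SOME_FLAG is a hypothetical name, not necessarily the variable common.sh tests):

    # give unset/empty flags a numeric default before arithmetic tests
    if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
        echo "flag enabled"
    fi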
00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:54.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:54.403 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # prepare_net_devs 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # local -g is_hw=no 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # remove_spdk_ns 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # xtrace_disable 00:23:54.404 19:53:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # pci_devs=() 00:23:56.300 
19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -a pci_devs 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # pci_net_devs=() 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # pci_drivers=() 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -A pci_drivers 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # net_devs=() 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # local -ga net_devs 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # e810=() 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # local -ga e810 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # x722=() 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # local -ga x722 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # mlx=() 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # local -ga mlx 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:56.300 Found 0000:0a:00.0 (0x8086 - 0x159b) 
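Device ID 0x159b, matched above for 0000:0a:00.0, is an Intel E810-family controller, so this NIC satisfies SPDK_TEST_NVMF_NICS=e810 from the job config; the second port (0000:0a:00.1) is matched in the next record. The pci_net_devs lookups that follow then resolve each PCI function to its kernel netdev name through sysfs; an illustrative stand-alone equivalent:

    # illustrative: list the net device(s) bound to a PCI function
    pci=0000:0a:00.0
    ls "/sys/bus/pci/devices/$pci/net/"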
00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:56.300 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # [[ up == up ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:56.300 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # [[ up == up ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.300 19:53:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:56.300 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # is_hw=yes 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.300 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:23:56.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:56.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:23:56.558 00:23:56.558 --- 10.0.0.2 ping statistics --- 00:23:56.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.558 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:23:56.558 00:23:56.558 --- 10.0.0.1 ping statistics --- 00:23:56.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.558 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@426 -- # return 0 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@725 -- # xtrace_disable 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@485 -- # nvmfpid=1265233 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # waitforlisten 1265233 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@832 -- # '[' -z 1265233 ']' 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local max_retries=100 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
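Both directions now ping cleanly (initiator on cvl_0_1 in the root namespace, target on cvl_0_0 inside cvl_0_0_ns_spdk), and nvmf_tgt has been launched in the target namespace with -L nvme_auth so the in-band authentication code paths log their activity. Condensed for readability, the preceding records built the topology with:

    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT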
00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@841 -- # xtrace_disable 00:23:56.558 19:53:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@865 -- # return 0 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@731 -- # xtrace_disable 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=bce942a8f74a269628b0b6f0283be9d2 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.i7f 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key bce942a8f74a269628b0b6f0283be9d2 0 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 bce942a8f74a269628b0b6f0283be9d2 0 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=bce942a8f74a269628b0b6f0283be9d2 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:23:56.816 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.i7f 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.i7f 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.i7f 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.074 19:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha512 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=64 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=98911b6d342f0505c3d9957b976849dec4284afdc70baf890045824ee8fa865e 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.qlU 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 98911b6d342f0505c3d9957b976849dec4284afdc70baf890045824ee8fa865e 3 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 98911b6d342f0505c3d9957b976849dec4284afdc70baf890045824ee8fa865e 3 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=98911b6d342f0505c3d9957b976849dec4284afdc70baf890045824ee8fa865e 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=3 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.qlU 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.qlU 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.qlU 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=d5350f2c82819e3639a1478c2e3af3f2ce084386599caf49 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.n4t 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key d5350f2c82819e3639a1478c2e3af3f2ce084386599caf49 0 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 d5350f2c82819e3639a1478c2e3af3f2ce084386599caf49 0 
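The key-generation records here follow one recipe per key: gen_dhchap_key <digest> <len> pulls len/2 bytes from /dev/urandom via xxd -p -c0, then the inline python wraps the hex into the DHHC-1 secret form used for NVMe-oF in-band authentication, with digest tags 0-3 for null/sha256/sha384/sha512 as the digests map above shows. A rough reconstruction of that wrapping, on my reading of the TP 8006 secret format (base64 of the key bytes plus a little-endian CRC-32), not the script's exact code:

    # sketch: build a DHHC-1 secret from 32 random bytes with the sha512 tag (3)
    hex=$(xxd -p -c0 -l 32 /dev/urandom)
    python3 -c 'import base64,binascii,sys;key=bytes.fromhex(sys.argv[1]);crc=binascii.crc32(key).to_bytes(4,"little");print("DHHC-1:03:"+base64.b64encode(key+crc).decode()+":")' "$hex"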
00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=d5350f2c82819e3639a1478c2e3af3f2ce084386599caf49 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.n4t 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.n4t 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.n4t 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha384 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=1d7e5e09a6537245a79250ef7b9846a1e44a275eee363c79 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.tpV 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 1d7e5e09a6537245a79250ef7b9846a1e44a275eee363c79 2 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 1d7e5e09a6537245a79250ef7b9846a1e44a275eee363c79 2 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=1d7e5e09a6537245a79250ef7b9846a1e44a275eee363c79 00:23:57.074 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=2 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.tpV 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.tpV 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.tpV 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.075 19:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha256 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=a9784a688c4bd94c51ff9b3f910bda36 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.XNW 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key a9784a688c4bd94c51ff9b3f910bda36 1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 a9784a688c4bd94c51ff9b3f910bda36 1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=a9784a688c4bd94c51ff9b3f910bda36 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.XNW 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.XNW 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.XNW 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha256 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=c118a061b00679ce4d7a0451ef6528ed 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha256.XXX 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha256.ed9 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key c118a061b00679ce4d7a0451ef6528ed 1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 c118a061b00679ce4d7a0451ef6528ed 1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # 
key=c118a061b00679ce4d7a0451ef6528ed 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=1 00:23:57.075 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha256.ed9 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha256.ed9 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ed9 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha384 00:23:57.332 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=48 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=8d73208a22b6c91facd54519cad310bf12af152aec555dc8 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha384.XXX 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha384.WRU 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key 8d73208a22b6c91facd54519cad310bf12af152aec555dc8 2 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 8d73208a22b6c91facd54519cad310bf12af152aec555dc8 2 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=8d73208a22b6c91facd54519cad310bf12af152aec555dc8 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=2 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha384.WRU 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha384.WRU 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.WRU 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=null 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=32 00:23:57.333 19:53:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=fbca677167d40b1c66ca5ed5ad57acf3 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-null.XXX 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-null.UBe 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key fbca677167d40b1c66ca5ed5ad57acf3 0 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 fbca677167d40b1c66ca5ed5ad57acf3 0 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=fbca677167d40b1c66ca5ed5ad57acf3 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=0 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@709 -- # python - 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-null.UBe 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-null.UBe 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.UBe 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # local digest len file key 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # local -A digests 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # digest=sha512 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # len=64 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@731 -- # key=ecf58f2c1d4f6664f2f886ee428828d21ae7ede7967422e24227aa83544cba51 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # mktemp -t spdk.key-sha512.XXX 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # file=/tmp/spdk.key-sha512.KcP 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # format_dhchap_key ecf58f2c1d4f6664f2f886ee428828d21ae7ede7967422e24227aa83544cba51 3 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # format_key DHHC-1 ecf58f2c1d4f6664f2f886ee428828d21ae7ede7967422e24227aa83544cba51 3 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@706 -- # local prefix key digest 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # prefix=DHHC-1 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # key=ecf58f2c1d4f6664f2f886ee428828d21ae7ede7967422e24227aa83544cba51 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # digest=3 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@709 -- # python - 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@734 -- # chmod 0600 /tmp/spdk.key-sha512.KcP 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@736 -- # echo /tmp/spdk.key-sha512.KcP 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.KcP 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1265233 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@832 -- # '[' -z 1265233 ']' 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local max_retries=100 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@841 -- # xtrace_disable 00:23:57.333 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@865 -- # return 0 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.i7f 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.qlU ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.qlU 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.n4t 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.tpV ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.tpV 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.XNW 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ed9 ]] 00:23:57.591 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ed9 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.WRU 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.UBe ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.UBe 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.KcP 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip
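
The gen_dhchap_key/keyring_file_add_key rounds traced above all follow one recipe: read len/2 random bytes as a hex string, wrap that ASCII string in the NVMe DH-HMAC-CHAP secret representation (base64 of the secret plus a trailing CRC-32, prefixed DHHC-1:<digest id>:), chmod the file to 0600, and register it with the running SPDK app. A minimal stand-alone sketch of one round follows; it is not the verbatim nvmf/common.sh helper, and the little-endian CRC byte order is an assumption inferred from the key strings in this trace:

  # one round of "gen_dhchap_key sha256 32": 16 random bytes -> 32 hex chars
  key=$(xxd -p -c0 -l 16 /dev/urandom)
  file=$(mktemp -t spdk.key-sha256.XXX)
  # base64(secret || CRC-32), tagged with digest id 01 (sha256); byte order assumed
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:01:%s:" % base64.b64encode(s + zlib.crc32(s).to_bytes(4, "little")).decode())' "$key" > "$file"
  chmod 0600 "$file"
  rpc_cmd keyring_file_add_key key2 "$file"   # rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock here

Note that the ASCII hex string itself, not the decoded bytes, is what gets base64-wrapped: decoding a key such as DHHC-1:01:YTk3ODRhNjg4... yields the literal text a9784a688... seen at the xxd step above.
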
00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@643 -- # local block nvme 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:57.592 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@646 -- # modprobe nvmet 00:23:57.849 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:57.849 19:53:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:58.781 Waiting for block devices as requested 00:23:58.781 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:59.039 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:59.039 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:59.297 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:59.297 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:59.297 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:59.297 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:59.554 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:59.554 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:59.554 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:59.812 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:59.812 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:59.812 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:59.812 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:00.069 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:00.069 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:00.069 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # local device=nvme0n1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1666 -- # [[ none != none ]] 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:00.682 No valid GPT data, bailing 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # [[ -b /dev/nvme0n1 ]] 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1
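
The mkdir calls above create the kernel nvmet configfs skeleton for the subsystem, its namespace 1, and port 1. The bare echo lines that follow in the trace have lost their redirection targets to line wrapping, but they correspond to the standard nvmet attribute files. A hedged reconstruction of this part of configure_kernel_target (the attribute paths below are the kernel's usual configfs layout, not quoted from the script):

  sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  port=/sys/kernel/config/nvmet/ports/1
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"   # model string reported to hosts
  echo 1 > "$sub/attr_allow_any_host"                        # host/auth.sh later echoes 0 to lock this down
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"        # back namespace 1 with the reset NVMe disk
  echo 1 > "$sub/namespaces/1/enable"
  echo 10.0.0.1 > "$port/addr_traddr"                        # listen on the initiator-side IP
  echo tcp > "$port/addr_trtype"
  echo 4420 > "$port/addr_trsvcid"
  echo ipv4 > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                           # expose the subsystem on the port

The nvme discover call right after this serves as a smoke test: two discovery log records (the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0) confirm the port is live before any authentication is configured.
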
00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo /dev/nvme0n1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # echo tcp 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # echo 4420 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # echo ipv4 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:00.682 00:24:00.682 Discovery Log Number of Records 2, Generation counter 2 00:24:00.682 =====Discovery Log Entry 0====== 00:24:00.682 trtype: tcp 00:24:00.682 adrfam: ipv4 00:24:00.682 subtype: current discovery subsystem 00:24:00.682 treq: not specified, sq flow control disable supported 00:24:00.682 portid: 1 00:24:00.682 trsvcid: 4420 00:24:00.682 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:00.682 traddr: 10.0.0.1 00:24:00.682 eflags: none 00:24:00.682 sectype: none 00:24:00.682 =====Discovery Log Entry 1====== 00:24:00.682 trtype: tcp 00:24:00.682 adrfam: ipv4 00:24:00.682 subtype: nvme subsystem 00:24:00.682 treq: not specified, sq flow control disable supported 00:24:00.682 portid: 1 00:24:00.682 trsvcid: 4420 00:24:00.682 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:00.682 traddr: 10.0.0.1 00:24:00.682 eflags: none 00:24:00.682 sectype: none 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:00.682 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:00.683 19:53:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.941 nvme0n1 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
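
connect_authenticate, invoked above for each digest/dhgroup/keyid combination, reduces on the host side to two RPCs: bdev_nvme_set_options pins the DH-HMAC-CHAP digests and DH groups the initiator may negotiate, and bdev_nvme_attach_controller presents the keyring entries loaded earlier. Stripped of the surrounding xtrace, and assuming rpc_cmd is the usual scripts/rpc.py wrapper on the default /var/tmp/spdk.sock socket, the round that is starting here looks like this:

  # allow only the digest/dhgroup combination under test
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
  # attach to the kernel target, authenticating with key0 and the bidirectional ckey0
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

Each round is then verified with bdev_nvme_get_controllers (expecting nvme0 to exist, hence the "nvme0n1" namespace lines) and torn down with bdev_nvme_detach_controller before the next combination is tried.
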
00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:00.941 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.199 nvme0n1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.199 19:53:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.199 nvme0n1 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.199 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:01.457 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.458 nvme0n1 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.458 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.716 19:53:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.716 nvme0n1 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 
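
On the target side, each nvmet_auth_set_key round (one is mid-flight here) pushes the matching material into the kernel through the host's configfs entry. The echo redirection targets are again truncated by the wrapping, but they plausibly map onto the nvmet dhchap attribute files; the names below are an assumption about the kernel interface, not quoted from host/auth.sh, and the key values are elided:

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest for this round
  echo ffdhe2048 > "$host/dhchap_dhgroup"        # DH group for this round
  echo 'DHHC-1:03:...' > "$host/dhchap_key"      # host secret (value elided)
  echo 'DHHC-1:...' > "$host/dhchap_ctrl_key"    # bidirectional key, only when a ckey is set

Because the same host entry is rewritten on every round, the kernel target and the SPDK initiator always agree on exactly one digest, DH group, and key pair per connect attempt.
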
00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:01.716 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.717 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 nvme0n1 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.975 19:53:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:01.975 
19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:01.975 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:01.976 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:01.976 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.976 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:01.976 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.233 nvme0n1 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.233 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:02.234 19:53:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.234 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.492 nvme0n1 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.492 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.493 19:53:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.493 19:53:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.751 nvme0n1 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:02.751 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:02.752 19:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:02.752 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.020 nvme0n1 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
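The pass traced just above (sha256 / ffdhe3072 / keyid 4) is the one combination without a controller secret: ckey= is assigned empty and the bare [[ -z '' ]] at host/auth.sh@51 skips the final echo, so only unidirectional authentication is configured for that key. Read as a whole, the set-key half of each iteration reduces to roughly the sketch below. Since xtrace does not print redirections, where the echoes land is not visible in this log; the configfs path and attribute names are assumptions based on the kernel nvmet host entry, not taken from the trace.

```bash
# Sketch of the target-side half, nvmet_auth_set_key (host/auth.sh@42-51).
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path
    echo "hmac($digest)" > "$host_dir/dhchap_hash"     # @48: echo 'hmac(sha256)'
    echo "$dhgroup" > "$host_dir/dhchap_dhgroup"       # @49: echo ffdhe3072
    echo "$key" > "$host_dir/dhchap_key"               # @50: echo DHHC-1:...
    # @51: keyid 4 carries no controller key, hence the bare [[ -z '' ]] in the trace
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}
```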
00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.020 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.279 nvme0n1 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local 
-A ip_candidates 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.279 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.846 nvme0n1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.846 19:53:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:03.846 19:53:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.104 nvme0n1 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.104 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
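With bdev_nvme_set_options now pinned to sha256/ffdhe4096, the initiator half of the iteration follows. Condensed from the xtrace, it reads roughly as the sketch below; the function body is reconstructed, not copied from host/auth.sh, rpc_cmd is the suite's wrapper around scripts/rpc.py, and keyN/ckeyN name key objects registered earlier in the test (not shown in this section).

```bash
# Reconstruction of connect_authenticate (host/auth.sh@55-65) from the trace.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # @60: restrict the host to one digest/dhgroup so exactly this pair is negotiated
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # @58: the :+ expansion emits the flag pair only when a controller key
    # exists, i.e. bidirectional authentication is requested only then
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    # @61: connect through the address resolved by get_main_ns_ip (10.0.0.1 here)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    # @64/@65: the connect only counts if the controller really came up
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
```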
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=()
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]]
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]]
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]]
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable
00:24:04.105 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.363 nvme0n1
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:04.363 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
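The DHHC-1 strings assigned to key and ckey throughout this trace follow the NVMe DH-HMAC-CHAP secret representation: "DHHC-1:<hash>:<base64>:" where the two-digit field selects the hash with which the secret is transformed (00 means it is used as is, 01/02/03 mean SHA-256/384/512) and the base64 payload is the raw secret plus a 4-byte CRC-32. That reading comes from the secret format itself, not from this log; it is consistent with the observed lengths, as the snippet below checks for two keys copied from the trace.

```bash
# Decode the payload length of two DHHC-1 secrets seen above.
keys=("DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL:"
      "DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==:")
for k in "${keys[@]}"; do
    IFS=: read -r fmt hash b64 _ <<< "$k"
    len=$(printf '%s' "$b64" | base64 -d | wc -c)
    # 36 bytes -> 32-byte secret + CRC-32; 52 -> 48 + CRC; 68 -> 64 + CRC
    printf '%s hash=%s payload=%d bytes (secret=%d + CRC-32)\n' "$fmt" "$hash" "$len" $((len - 4))
done
```

The sweep here is deliberate: across keyids 0-4 the trace mixes hash ids 00 through 03 and all three secret sizes, so each digest/dhgroup pair is exercised against every transformed-secret variant.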
00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.364 19:53:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.364 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.621 nvme0n1 00:24:04.621 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.621 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.621 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.621 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.621 19:53:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.879 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:04.880 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.138 nvme0n1 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:05.138 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 
]] 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.139 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.704 nvme0n1 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.704 19:53:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:05.704 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.269 nvme0n1 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.269 19:53:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.269 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.270 19:53:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.835 nvme0n1 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:06.835 
19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:06.835 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.401 nvme0n1 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
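
[annotation] The secrets traced above follow the DHHC-1 representation from the NVMe DH-HMAC-CHAP proposal (TP 8006): DHHC-1:<t>:<base64>:, where <t> names the transformation hash applied to the configured secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32 check value. This test ships fixed keys so runs stay reproducible; for reference, a fresh secret can be minted with nvme-cli, though flag spelling may vary between nvme-cli versions, so treat this as a sketch:

    # generate a 32-byte secret transformed with SHA-256
    nvme gen-dhchap-key --key-length=32 --hmac=1    # prints DHHC-1:01:<base64>:
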
host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:07.401 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@746 -- # local -A ip_candidates 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:07.659 19:53:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.225 nvme0n1 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:08.225 19:53:25 
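
[annotation] Note the keyid 4 pass above: host/auth.sh@46 sets ckey= to the empty string, and the subsequent attach carries --dhchap-key key4 with no --dhchap-ctrlr-key at all. That is the host/auth.sh@58 array idiom at work: ${var:+word} expands to word only when var is set and non-empty, so the optional controller-key flags appear or vanish as a unit. A sketch of the idiom, with helper names as in the trace:

    # expands to two extra arguments when a ckey exists for this keyid, none otherwise
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
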
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:08.225 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:08.226 19:53:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.158 nvme0n1 00:24:09.158 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:09.159 19:53:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.091 nvme0n1 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.091 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.092 19:53:27 
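
[annotation] At this point the dhgroup loop has advanced from ffdhe6144 to ffdhe8192 while the digest is still sha256, matching the loop markers in the trace (host/auth.sh@100 for digests, @101 for dhgroups, @102 for key ids). The sweep therefore has the shape below; the for lines are verbatim from the trace, while any array contents beyond what this excerpt shows (sha256/sha384, ffdhe2048/6144/8192, key ids 0-4) are assumptions:

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # host/auth.sh@103
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host/auth.sh@104
            done
        done
    done
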
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:10.092 19:53:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.465 nvme0n1 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.465 19:53:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:11.465 19:53:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.398 nvme0n1 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:12.398 19:53:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:12.398 19:53:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 nvme0n1 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:13.332 
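
[annotation] The nvmf/common.sh@745-759 run that precedes every attach is get_main_ns_ip resolving which address to dial: it maps the transport to the name of an environment variable (rdma -> NVMF_FIRST_TARGET_IP, tcp -> NVMF_INITIATOR_IP) and then expands that name indirectly, yielding 10.0.0.1 here. A reconstruction from the trace; the guard branches and the $TEST_TRANSPORT variable name are assumptions, since xtrace only shows expanded values:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}    # name of the variable holding the IP
        [[ -z ${!ip} ]] && return 1             # indirect expansion: 10.0.0.1 here
        echo "${!ip}"
    }
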
19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.332 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.590 nvme0n1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # local ip 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.591 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 nvme0n1 00:24:13.849 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.849 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.849 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.849 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.849 19:53:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.849 nvme0n1 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:13.849 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:14.107 19:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 nvme0n1 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.107 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
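[Annotation] Every block of trace in this stretch of the log exercises one digest/dhgroup/keyid combination of NVMe-oF in-band DH-HMAC-CHAP authentication. A condensed sketch of a single iteration, reconstructed from the host/auth.sh fragments above: the rpc_cmd wrapper, the nvmet_auth_set_key helper, the ckey array trick (host/auth.sh@58), the address/port, and the NQNs are taken from the trace itself; the loop framing and the keys/ckeys arrays (populated earlier in auth.sh) are inferred from the @101-@104 markers and may not match the real script line for line:

    # One sweep as the trace suggests (digest is sha384 throughout this
    # stretch of the log; the full test presumably sweeps other digests too).
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096; do      # host/auth.sh@101
      for keyid in "${!keys[@]}"; do                      # host/auth.sh@102
        # Install the key (and controller key, if any) on the target side.
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"     # host/auth.sh@103
        # Only pass --dhchap-ctrlr-key when a ckey exists for this keyid
        # (keyid 4 has none, so the flag is dropped) -- host/auth.sh@58.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Pin the initiator to a single digest/dhgroup combination.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        # Connect with the matching DH-HMAC-CHAP key pair; get_main_ns_ip
        # resolves to 10.0.0.1 on this host (see the sketch at the end).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication succeeded if the controller shows up; then detach.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done

The DHHC-1:NN:...: strings cycled through the key/ckey assignments appear to follow the NVMe DH-HMAC-CHAP secret representation (the format nvme-cli's gen-dhchap-key emits): a one-byte class after the version tag (00 for an unhashed secret, 01/02/03 for the SHA-256/384/512 secret lengths) and a base64 payload carrying the secret plus a 4-byte CRC32, which is why the 01-class keys above decode to 36 bytes.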
00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # 
xtrace_disable 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 nvme0n1 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.367 19:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.367 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 nvme0n1 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.626 19:53:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.626 19:53:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.626 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:14.885 19:53:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 nvme0n1 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.885 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.886 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.886 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.886 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:14.886 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:14.886 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.170 nvme0n1 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.170 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.171 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.171 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:15.171 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.171 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local 
-A ip_candidates 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.431 nvme0n1 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.431 
19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.431 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.432 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.689 nvme0n1 00:24:15.689 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.689 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.689 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.689 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.689 19:53:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.689 
19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.689 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 
]] 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:15.690 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:15.947 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.947 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:15.947 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.206 nvme0n1 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.206 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.207 19:53:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.207 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.464 nvme0n1 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.464 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.465 19:53:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.722 nvme0n1 00:24:16.722 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.722 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.722 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.722 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.722 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.722 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.979 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:16.980 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 nvme0n1 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.237 19:53:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.237 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.238 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.495 nvme0n1 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:17.495 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:17.496 19:53:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.061 nvme0n1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.061 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.626 nvme0n1 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.626 19:53:35 
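The DHHC-1 strings exercised throughout this sweep are DH-HMAC-CHAP shared secrets in the representation used by NVMe-oF and nvme-cli: a fixed DHHC-1 prefix, a two-digit transform field (00 = secret used as-is; 01/02/03 = SHA-256/384/512, which also implies a 32/48/64-byte secret), and a base64 payload carrying the secret followed by its CRC-32. A minimal sketch that splits one of the keys from this run into those fields — the field meanings and CRC note come from that representation, not from this log, and the snippet assumes coreutils base64 and od:

    key='DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL:'
    IFS=: read -r prefix transform b64 _ <<< "$key"
    echo "format=$prefix transform=$transform"    # 00 = no hash transform
    # payload = secret bytes followed by a 4-byte CRC-32 of the secret
    printf '%s' "$b64" | base64 -d | od -An -tx1

Here the payload decodes to 36 bytes, i.e. a 32-byte secret plus the 4 CRC bytes, consistent with the transform field.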
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:18.626 19:53:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:18.626 19:53:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.190 nvme0n1 00:24:19.190 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:19.190 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.190 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.190 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:19.190 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.190 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.448 19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:19.448 
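Every case in this sweep has the same shape, visible in the repeated @103/@104 markers: program the target's expected secret for the keyid, restrict the host to the one digest/dhgroup pair under test, attach with the matching host key, confirm the controller came up, and detach before the next case. A condensed sketch of one iteration as it appears in this trace — rpc_cmd, nvmet_auth_set_key, and the NQNs/addresses are as logged; wrapping them in a reusable function is this sketch's own framing, not the literal auth.sh source:

    run_auth_case() {   # e.g. run_auth_case sha384 ffdhe6144 3
        local digest=$1 dhgroup=$2 keyid=$3
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"        # target side
        rpc_cmd bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # controller key is optional; expands to nothing when unset/empty
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }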
19:53:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.015 nvme0n1 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.015 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.581 nvme0n1 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.581 19:53:37 
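Keyid 4 differs from the others: its controller key is empty (the ckey= followed by [[ -z '' ]] in the trace above), so the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key entirely and the attach carries only --dhchap-key key4. Authentication is then one-way: the host answers the target's challenge but never challenges the target back. The expansion in isolation, with placeholder values standing in for real secrets:

    declare -a ckeys=([0]='DHHC-1:placeholder0:' [4]='')   # placeholders
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<none: unidirectional>}"
    done
    # keyid=0 -> --dhchap-ctrlr-key ckey0
    # keyid=4 -> <none: unidirectional>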
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:20.581 19:53:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.514 nvme0n1 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:21.514 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:21.515 19:53:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.447 nvme0n1 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # 
xtrace_disable 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:22.447 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.705 
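get_main_ns_ip, whose trace repeats before every attach (the ip_candidates block above and below), resolves which address the initiator should dial: it maps the transport to the name of the environment variable holding the address, dereferences it, and echoes the result — 10.0.0.1 throughout this run. A reconstruction from the traced statements; the indirect expansion and error returns are inferred, so this is not the verbatim nvmf/common.sh source:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1     # trace: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}     # variable *name*
        [[ -z $ip ]] && return 1                 # [[ -z NVMF_INITIATOR_IP ]]
        ip=${!ip}                                # dereference -> actual IP
        [[ -z $ip ]] && return 1                 # [[ -z 10.0.0.1 ]]
        echo "$ip"
    }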
19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:22.705 19:53:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.638 nvme0n1 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:23.638 19:53:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.571 nvme0n1 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.571 19:53:41 
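The @100/@101/@102 for-markers that reappear just below mark the three nested sweeps: immediately after this point the trace exhausts ffdhe8192 under sha384 and rolls over to sha512, restarting at ffdhe2048 with keyid 0. The implied structure, with the array contents partly assumed — only sha384/sha512 and ffdhe2048/4096/6144/8192 are actually visible in this section, so the full lists below are an inference, not logged fact:

    digests=(sha256 sha384 sha512)                                # assumed; @100
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumed; @101
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do      # @102: indices 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done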
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:24.571 19:53:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:24.571 19:53:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.507 nvme0n1 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.507 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:25.767 nvme0n1 00:24:25.767 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.767 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.767 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.767 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.767 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.767 19:53:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.767 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:25.768 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.028 nvme0n1 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:26.028 
19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:26.028 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.029 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.287 nvme0n1 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.287 
19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.287 nvme0n1 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.287 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.545 nvme0n1 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.545 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.803 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:26.804 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:26.804 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:26.804 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.804 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.804 19:53:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.804 nvme0n1 00:24:26.804 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:26.804 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.804 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:26.804 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.804 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.804 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.061 
19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:27.062 19:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.062 nvme0n1 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.062 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:27.320 19:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.320 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.321 nvme0n1 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.321 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.579 19:53:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.579 nvme0n1 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.579 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.837 
19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.837 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.838 19:53:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
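The trace above repeats one identical cycle per (digest, dhgroup, keyid) combination — here sha384/ffdhe8192 and then sha512 across ffdhe2048/ffdhe3072 for key ids 0 through 4. Condensed, the host-side steps visible in the trace are sketched below. This is a paraphrase of what host/auth.sh drives through rpc_cmd, not the script itself; it assumes the DH-HMAC-CHAP secrets were already registered under the key names key0..key4 (and controller keys ckey0..ckey4) earlier in the run, and it elides the matching nvmet_auth_set_key writes that load the same secret into the Linux nvmet target before each attach:

  # restrict the SPDK initiator to a single digest/dhgroup pair, e.g. sha512 + ffdhe3072
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

  # attach to the authenticated subsystem; --dhchap-ctrlr-key enables bidirectional
  # auth and is omitted for keyid 4, whose ckey is empty in the trace
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # the combination passes if the controller appears, then it is torn down
  # so the next digest/dhgroup/keyid cycle starts clean
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The 10.0.0.1 address is what get_main_ns_ip resolves in the trace: for the tcp transport it picks NVMF_INITIATOR_IP from its ip_candidates map (NVMF_FIRST_TARGET_IP would be used for rdma).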
00:24:27.838 nvme0n1 00:24:27.838 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:27.838 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.838 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:27.838 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.838 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.838 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:28.097 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:28.098 19:53:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.098 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.358 nvme0n1 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.358 19:53:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.358 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.359 19:53:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.359 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.619 nvme0n1 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.619 19:53:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.878 nvme0n1 00:24:28.878 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:28.878 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.878 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.878 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:28.878 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.138 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.398 nvme0n1 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.398 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.679 nvme0n1 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.679 19:53:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:29.679 19:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:29.679 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.259 nvme0n1 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:30.259 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:30.260 19:53:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:30.260 19:53:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.828 nvme0n1 00:24:30.828 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:30.828 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.828 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:30.828 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.828 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.829 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.087 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.655 nvme0n1 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:31.655 19:53:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.222 nvme0n1 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.222 19:53:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.222 19:53:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.793 nvme0n1 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmNlOTQyYThmNzRhMjY5NjI4YjBiNmYwMjgzYmU5ZDK54oyL: 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: ]] 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:OTg5MTFiNmQzNDJmMDUwNWMzZDk5NTdiOTc2ODQ5ZGVjNDI4NGFmZGM3MGJhZjg5MDA0NTgyNGVlOGZhODY1ZXk+mvY=: 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.793 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:32.794 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.730 nvme0n1 00:24:33.730 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:33.730 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.730 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:33.730 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.730 19:53:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:33.730 19:53:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.665 nvme0n1 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.922 19:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.922 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTk3ODRhNjg4YzRiZDk0YzUxZmY5YjNmOTEwYmRhMzbmblre: 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: ]] 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzExOGEwNjFiMDA2NzljZTRkN2EwNDUxZWY2NTI4ZWSR/MmV: 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:34.923 19:53:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:34.923 19:53:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.860 nvme0n1 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGQ3MzIwOGEyMmI2YzkxZmFjZDU0NTE5Y2FkMzEwYmYxMmFmMTUyYWVjNTU1ZGM4NPvZxQ==: 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZmJjYTY3NzE2N2Q0MGIxYzY2Y2E1ZWQ1YWQ1N2FjZjPdHtgO: 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:35.860 19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:35.860 
19:53:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.795 nvme0n1 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWNmNThmMmMxZDRmNjY2NGYyZjg4NmVlNDI4ODI4ZDIxYWU3ZWRlNzk2NzQyMmUyNDIyN2FhODM1NDRjYmE1McTcL+4=: 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@562 -- # xtrace_disable 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:36.796 19:53:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 nvme0n1 00:24:37.732 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:37.732 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.732 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.732 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.732 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.732 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDUzNTBmMmM4MjgxOWUzNjM5YTE0NzhjMmUzYWYzZjJjZTA4NDM4NjU5OWNhZjQ5oY3r1w==: 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: ]] 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ3ZTVlMDlhNjUzNzI0NWE3OTI1MGVmN2I5ODQ2YTFlNDRhMjc1ZWVlMzYzYzc5xgUxMA==: 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:37.990 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # local es=0 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@643 -- # type -t rpc_cmd 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 request: 00:24:37.991 { 00:24:37.991 "name": "nvme0", 00:24:37.991 "trtype": "tcp", 00:24:37.991 "traddr": "10.0.0.1", 00:24:37.991 "adrfam": "ipv4", 00:24:37.991 "trsvcid": "4420", 00:24:37.991 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:37.991 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:37.991 "prchk_reftag": false, 00:24:37.991 "prchk_guard": false, 00:24:37.991 "hdgst": false, 00:24:37.991 "ddgst": false, 00:24:37.991 "method": "bdev_nvme_attach_controller", 00:24:37.991 "req_id": 1 00:24:37.991 } 00:24:37.991 Got JSON-RPC error response 00:24:37.991 response: 00:24:37.991 { 00:24:37.991 "code": -5, 00:24:37.991 "message": "Input/output error" 00:24:37.991 } 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # es=1 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:37.991 19:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # local es=0 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 request: 00:24:37.991 { 00:24:37.991 "name": "nvme0", 00:24:37.991 "trtype": "tcp", 00:24:37.991 "traddr": "10.0.0.1", 00:24:37.991 "adrfam": "ipv4", 00:24:37.991 "trsvcid": "4420", 00:24:37.991 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:37.991 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:37.991 "prchk_reftag": false, 00:24:37.991 "prchk_guard": false, 00:24:37.991 "hdgst": false, 00:24:37.991 "ddgst": false, 00:24:37.991 "dhchap_key": "key2", 00:24:37.991 "method": "bdev_nvme_attach_controller", 00:24:37.991 "req_id": 1 00:24:37.991 } 00:24:37.991 Got JSON-RPC error response 00:24:37.991 response: 00:24:37.991 { 00:24:37.991 "code": -5, 00:24:37.991 "message": "Input/output error" 00:24:37.991 } 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # es=1 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.991 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # local ip 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # ip_candidates=() 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@746 -- # local -A ip_candidates 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # local es=0 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.251 request: 00:24:38.251 { 00:24:38.251 "name": "nvme0", 00:24:38.251 "trtype": "tcp", 00:24:38.251 "traddr": "10.0.0.1", 00:24:38.251 "adrfam": "ipv4", 00:24:38.251 "trsvcid": "4420", 00:24:38.251 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:38.251 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:38.251 "prchk_reftag": false, 00:24:38.251 "prchk_guard": false, 00:24:38.251 "hdgst": false, 00:24:38.251 "ddgst": false, 00:24:38.251 "dhchap_key": "key1", 00:24:38.251 "dhchap_ctrlr_key": "ckey2", 00:24:38.251 "method": "bdev_nvme_attach_controller", 00:24:38.251 "req_id": 1 00:24:38.251 } 00:24:38.251 Got JSON-RPC error response 00:24:38.251 response: 00:24:38.251 { 00:24:38.251 "code": -5, 00:24:38.251 "message": "Input/output error" 00:24:38.251 } 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # es=1 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # nvmfcleanup 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:38.251 rmmod nvme_tcp 00:24:38.251 rmmod nvme_fabrics 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # '[' -n 1265233 ']' 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@494 -- # killprocess 1265233 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@951 -- # '[' -z 1265233 ']' 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # kill -0 1265233 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # uname 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1265233 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1265233' 00:24:38.251 killing process with pid 1265233 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # kill 1265233 00:24:38.251 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@975 -- # wait 1265233 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:38.510 19:53:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@282 -- # remove_spdk_ns 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:38.510 19:53:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:41.045 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:24:41.045 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:41.045 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # echo 0 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:24:41.046 19:53:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:41.982 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:41.982 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:41.982 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:42.920 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:42.920 19:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.i7f /tmp/spdk.key-null.n4t /tmp/spdk.key-sha256.XNW /tmp/spdk.key-sha384.WRU /tmp/spdk.key-sha512.KcP /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:42.920 19:54:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:44.294 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:44.294 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:44.294 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:44.294 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:44.294 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:44.294 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:44.294 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:44.294 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:44.294 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:44.294 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:44.294 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:44.294 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:44.294 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:44.294 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:44.294 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:44.294 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:44.294 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:44.294 00:24:44.294 real 0m49.865s 00:24:44.294 user 0m47.649s 00:24:44.294 sys 0m5.887s 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # xtrace_disable 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 ************************************ 00:24:44.294 END TEST nvmf_auth_host 00:24:44.294 ************************************ 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.294 ************************************ 00:24:44.294 START TEST nvmf_digest 00:24:44.294 ************************************ 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:44.294 * Looking for test storage... 
00:24:44.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ 
tcp != \t\c\p ]] 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@452 -- # prepare_net_devs 00:24:44.294 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # local -g is_hw=no 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # remove_spdk_ns 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # xtrace_disable 00:24:44.295 19:54:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # pci_devs=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -a pci_devs 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # pci_net_devs=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # pci_drivers=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -A pci_drivers 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # net_devs=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # local -ga net_devs 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # e810=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # local -ga e810 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # x722=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # local -ga x722 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # mlx=() 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # local -ga mlx 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.198 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:46.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:46.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@393 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # [[ up == up ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:46.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # [[ up == up ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:46.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # is_hw=yes 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:24:46.199 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:24:46.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:24:46.456 00:24:46.456 --- 10.0.0.2 ping statistics --- 00:24:46.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.456 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:24:46.456 00:24:46.456 --- 10.0.0.1 ping statistics --- 00:24:46.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.456 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@426 -- # return 0 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1108 -- # xtrace_disable 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:46.456 ************************************ 00:24:46.456 START TEST nvmf_digest_clean 00:24:46.456 ************************************ 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # run_digest 
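Before the digest tests proper begin, the nvmftestinit trace above is worth unpacking: after enumerating the two e810 ports, it builds the test network from a single PCI NIC pair, moving cvl_0_0 into a private namespace as the NVMe/TCP target (10.0.0.2) while its sibling cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace (nvmf/common.sh @233-@272), omitting the address flushes:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listen port
ping -c 1 10.0.0.2                                    # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator

Every target-side command from here on is wrapped in ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD above), which is why nvmf_tgt below is launched through it.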
00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@725 -- # xtrace_disable 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@485 -- # nvmfpid=1274827 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@486 -- # waitforlisten 1274827 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # '[' -z 1274827 ']' 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:24:46.456 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.456 [2024-07-24 19:54:03.704322] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:24:46.456 [2024-07-24 19:54:03.704406] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.456 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.456 [2024-07-24 19:54:03.768417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.714 [2024-07-24 19:54:03.874065] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.714 [2024-07-24 19:54:03.874145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.714 [2024-07-24 19:54:03.874174] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.714 [2024-07-24 19:54:03.874185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:46.714 [2024-07-24 19:54:03.874194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.714 [2024-07-24 19:54:03.874227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@865 -- # return 0 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@731 -- # xtrace_disable 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@562 -- # xtrace_disable 00:24:46.714 19:54:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.714 null0 00:24:46.714 [2024-07-24 19:54:04.052595] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.714 [2024-07-24 19:54:04.076830] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1274847 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1274847 /var/tmp/bperf.sock 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # '[' -z 1274847 ']' 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:46.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:24:46.714 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:46.997 [2024-07-24 19:54:04.124314] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:24:46.997 [2024-07-24 19:54:04.124382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274847 ] 00:24:46.997 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.997 [2024-07-24 19:54:04.186137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.997 [2024-07-24 19:54:04.302747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.258 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:24:47.258 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@865 -- # return 0 00:24:47.258 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:47.258 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:47.258 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:47.516 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.516 19:54:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:47.774 nvme0n1 00:24:47.774 19:54:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:47.774 19:54:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:48.032 Running I/O for 2 seconds... 
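Each digest_clean run follows the same bdevperf choreography to get here (host/digest.sh @82-@92 above): start bdevperf idle against its own RPC socket, initialize its framework, attach the target with TCP data digest enabled, then kick off the workload from bdevperf.py. Condensed from the trace; the harness also polls waitforlisten on bperf.sock before issuing RPCs, which is omitted here:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &       # -z: stay idle until told
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                    # --ddgst: data digest on
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The attach returns the namespace bdev nvme0n1 seen above, which is the device the Job lines in the results tables report against.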
00:24:49.929 00:24:49.929 Latency(us) 00:24:49.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.929 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:49.929 nvme0n1 : 2.00 18094.42 70.68 0.00 0.00 7064.30 3616.62 16602.45 00:24:49.929 =================================================================================================================== 00:24:49.929 Total : 18094.42 70.68 0.00 0.00 7064.30 3616.62 16602.45 00:24:49.929 0 00:24:49.929 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:49.929 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:49.929 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:49.929 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:49.929 | select(.opcode=="crc32c") 00:24:49.929 | "\(.module_name) \(.executed)"' 00:24:49.929 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:50.186 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1274847 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' -z 1274847 ']' 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # kill -0 1274847 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # uname 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1274847 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1274847' 00:24:50.187 killing process with pid 1274847 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # kill 1274847 00:24:50.187 Received shutdown signal, test time was about 2.000000 seconds 00:24:50.187 00:24:50.187 Latency(us) 00:24:50.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.187 =================================================================================================================== 00:24:50.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:50.187 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@975 -- # wait 1274847 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1275493 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1275493 /var/tmp/bperf.sock 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # '[' -z 1275493 ']' 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:50.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:24:50.444 19:54:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:50.444 [2024-07-24 19:54:07.813731] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:24:50.444 [2024-07-24 19:54:07.813825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275493 ] 00:24:50.444 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:50.444 Zero copy mechanism will not be used. 
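After each run, digest.sh verifies that the CRC32C digests were really computed, and computed in software (DSA is disabled in this job, scan_dsa=false): it pulls bdevperf's accel statistics and extracts the crc32c module name and execution count, as traced after the first run above (host/digest.sh @93-@96). Condensed:

read -r acc_module acc_executed < <(
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))                 # some digests were actually computed...
[[ $acc_module == software ]]          # ...and by the software module, not DSA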
00:24:50.702 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.702 [2024-07-24 19:54:07.876566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.702 [2024-07-24 19:54:07.984736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.702 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:24:50.702 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@865 -- # return 0 00:24:50.702 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:50.702 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:50.702 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:51.267 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.267 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:51.524 nvme0n1 00:24:51.524 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:51.524 19:54:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:51.524 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:51.524 Zero copy mechanism will not be used. 00:24:51.524 Running I/O for 2 seconds... 
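A quick sanity check on the Job lines in these tables: the MiB/s column is just IOPS times IO size, so the two columns can be cross-checked with bc. For the 4 KiB randread run above and the 128 KiB run reported below:

echo '18094.42 * 4096 / 1048576' | bc -l     # -> 70.68 MiB/s, matching the table
echo '4525.55 * 131072 / 1048576' | bc -l    # -> 565.69 MiB/s, matching the table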
00:24:54.049 00:24:54.049 Latency(us) 00:24:54.049 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.049 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:54.049 nvme0n1 : 2.00 4525.55 565.69 0.00 0.00 3530.60 655.36 12815.93 00:24:54.049 =================================================================================================================== 00:24:54.049 Total : 4525.55 565.69 0.00 0.00 3530.60 655.36 12815.93 00:24:54.049 0 00:24:54.049 19:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:54.049 19:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:54.049 19:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:54.049 19:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:54.049 | select(.opcode=="crc32c") 00:24:54.049 | "\(.module_name) \(.executed)"' 00:24:54.049 19:54:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:54.049 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:54.049 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:54.049 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:54.049 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1275493 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' -z 1275493 ']' 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # kill -0 1275493 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # uname 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1275493 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1275493' 00:24:54.050 killing process with pid 1275493 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # kill 1275493 00:24:54.050 Received shutdown signal, test time was about 2.000000 seconds 00:24:54.050 00:24:54.050 Latency(us) 00:24:54.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.050 =================================================================================================================== 00:24:54.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@975 -- # wait 1275493 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1276169 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1276169 /var/tmp/bperf.sock 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # '[' -z 1276169 ']' 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:54.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:24:54.050 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:54.050 [2024-07-24 19:54:11.422356] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:24:54.050 [2024-07-24 19:54:11.422442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276169 ] 00:24:54.308 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.308 [2024-07-24 19:54:11.485515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.308 [2024-07-24 19:54:11.595863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.308 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:24:54.308 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@865 -- # return 0 00:24:54.308 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:54.308 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:54.308 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:54.875 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:54.875 19:54:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:55.132 nvme0n1 00:24:55.132 19:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:55.132 19:54:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:55.132 Running I/O for 2 seconds... 
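The randwrite pass now underway is the same helper with different parameters: host/digest.sh @128-@131 above call run_bperf for randread and randwrite at 4 KiB/queue depth 128 and 128 KiB/queue depth 16, always with scan_dsa=false. Equivalent to the following loop (a condensed sketch, not the script's literal text):

for params in 'randread 4096 128' 'randread 131072 16' \
              'randwrite 4096 128' 'randwrite 131072 16'; do
    run_bperf $params false    # rw, block size, queue depth, scan_dsa
done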
00:24:57.660 00:24:57.660 Latency(us) 00:24:57.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.660 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:57.660 nvme0n1 : 2.01 21134.56 82.56 0.00 0.00 6046.44 3131.16 16311.18 00:24:57.660 =================================================================================================================== 00:24:57.660 Total : 21134.56 82.56 0.00 0.00 6046.44 3131.16 16311.18 00:24:57.660 0 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:57.660 | select(.opcode=="crc32c") 00:24:57.660 | "\(.module_name) \(.executed)"' 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1276169 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' -z 1276169 ']' 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # kill -0 1276169 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # uname 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1276169 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1276169' 00:24:57.660 killing process with pid 1276169 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # kill 1276169 00:24:57.660 Received shutdown signal, test time was about 2.000000 seconds 00:24:57.660 00:24:57.660 Latency(us) 00:24:57.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:57.660 =================================================================================================================== 00:24:57.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:57.660 19:54:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@975 -- # wait 1276169 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1276693 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1276693 /var/tmp/bperf.sock 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # '[' -z 1276693 ']' 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:24:57.660 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:57.918 [2024-07-24 19:54:15.064192] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:24:57.918 [2024-07-24 19:54:15.064304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276693 ] 00:24:57.918 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:57.918 Zero copy mechanism will not be used. 
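Each bdevperf instance above is reaped through the same killprocess helper (common/autotest_common.sh @951-@975 in the trace). A condensed sketch of the pattern, showing only the code path exercised in this log:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return                   # @955: is the process still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")    # @957: reactor_1 for the bperf apps here
    [[ $name == sudo ]] && return 1            # @961: never signal a sudo wrapper directly
    echo "killing process with pid $pid"       # @969
    kill "$pid"                                # @970
    wait "$pid"                                # @975: reap and surface the exit status
}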
00:24:57.918 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.918 [2024-07-24 19:54:15.123911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.918 [2024-07-24 19:54:15.232246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.918 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:24:57.918 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@865 -- # return 0 00:24:57.918 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:57.918 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:57.918 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:58.484 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.484 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:58.741 nvme0n1 00:24:58.741 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:58.741 19:54:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:58.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:58.741 Zero copy mechanism will not be used. 00:24:58.741 Running I/O for 2 seconds... 
00:25:01.291 00:25:01.291 Latency(us) 00:25:01.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.291 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:01.291 nvme0n1 : 2.00 5173.61 646.70 0.00 0.00 3084.45 2148.12 13883.92 00:25:01.291 =================================================================================================================== 00:25:01.291 Total : 5173.61 646.70 0.00 0.00 3084.45 2148.12 13883.92 00:25:01.291 0 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:01.291 | select(.opcode=="crc32c") 00:25:01.291 | "\(.module_name) \(.executed)"' 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1276693 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' -z 1276693 ']' 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # kill -0 1276693 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # uname 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1276693 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1276693' 00:25:01.291 killing process with pid 1276693 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # kill 1276693 00:25:01.291 Received shutdown signal, test time was about 2.000000 seconds 00:25:01.291 00:25:01.291 Latency(us) 00:25:01.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.291 =================================================================================================================== 00:25:01.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@975 -- # wait 1276693 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1274827 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' -z 1274827 ']' 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # kill -0 1274827 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # uname 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:25:01.291 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1274827 00:25:01.551 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:25:01.551 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:25:01.551 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1274827' 00:25:01.551 killing process with pid 1274827 00:25:01.551 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # kill 1274827 00:25:01.551 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@975 -- # wait 1274827 00:25:01.809 00:25:01.809 real 0m15.299s 00:25:01.809 user 0m30.521s 00:25:01.809 sys 0m4.100s 00:25:01.809 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:01.809 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:01.809 ************************************ 00:25:01.809 END TEST nvmf_digest_clean 00:25:01.809 ************************************ 00:25:01.809 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:01.809 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:25:01.809 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:01.809 19:54:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:01.809 ************************************ 00:25:01.809 START TEST nvmf_digest_error 00:25:01.809 ************************************ 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # run_digest_error 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@725 -- # xtrace_disable 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@485 -- # nvmfpid=1277136 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:01.810 19:54:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@486 -- # waitforlisten 1277136 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # '[' -z 1277136 ']' 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local max_retries=100 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # xtrace_disable 00:25:01.810 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:01.810 [2024-07-24 19:54:19.061773] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:25:01.810 [2024-07-24 19:54:19.061884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.810 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.810 [2024-07-24 19:54:19.126708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.068 [2024-07-24 19:54:19.232673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.068 [2024-07-24 19:54:19.232724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.068 [2024-07-24 19:54:19.232752] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.068 [2024-07-24 19:54:19.232763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.068 [2024-07-24 19:54:19.232773] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.068 [2024-07-24 19:54:19.232802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@865 -- # return 0 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@731 -- # xtrace_disable 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.068 [2024-07-24 19:54:19.289316] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.068 null0 00:25:02.068 [2024-07-24 19:54:19.400997] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.068 [2024-07-24 19:54:19.425249] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1277255 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1277255 /var/tmp/bperf.sock 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # '[' -z 1277255 ']' 
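nvmf_digest_error differs from the clean pass in one respect: on the target, crc32c has been routed to the error-injecting accel module (accel_assign_opc -o crc32c -m error, traced above), so digest corruption can be switched on at will. Injection stays disabled while bdevperf attaches, then is flipped to corrupt, which is what produces the data-digest failures below (host/digest.sh @63-@67). Condensed, assuming rpc_cmd talks to the target's default /var/tmp/spdk.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc accel_error_inject_error -o crc32c -t disable             # @63: attach must succeed
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0   # @64
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256      # @67: -i 256 as traced

The effect is visible at the end of this section: reads complete with "data digest error on tqpair" at the initiator and are reported as COMMAND TRANSIENT TRANSPORT ERROR.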
00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local max_retries=100 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # xtrace_disable 00:25:02.068 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.329 [2024-07-24 19:54:19.479941] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:25:02.329 [2024-07-24 19:54:19.480027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277255 ] 00:25:02.329 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.329 [2024-07-24 19:54:19.541554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.329 [2024-07-24 19:54:19.656938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.587 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:25:02.587 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@865 -- # return 0 00:25:02.587 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:02.587 19:54:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:02.846 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:02.846 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:02.846 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:02.846 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:02.846 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:02.846 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:03.412 nvme0n1 00:25:03.412 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:03.412 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:03.412 19:54:20 
00:25:03.412 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:03.412 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:03.412 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:03.412 19:54:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:03.412 Running I/O for 2 seconds...
00:25:03.412 [2024-07-24 19:54:20.683622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0)
00:25:03.412 [2024-07-24 19:54:20.683674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:03.412 [2024-07-24 19:54:20.683705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[The remainder of the 2-second run (2024-07-24 19:54:20.701379 through 19:54:22.499188) repeats this same three-line pattern -- data digest error on tqpair=(0x1f66cb0), the failing READ, and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- with only cid and lba varying; those entries are elided here.]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.425627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.261 [2024-07-24 19:54:22.425663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11365 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.261 [2024-07-24 19:54:22.425682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.438807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.261 [2024-07-24 19:54:22.438837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.261 [2024-07-24 19:54:22.438867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.454824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.261 [2024-07-24 19:54:22.454856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.261 [2024-07-24 19:54:22.454888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.471770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.261 [2024-07-24 19:54:22.471801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.261 [2024-07-24 19:54:22.471833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.482816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.261 [2024-07-24 19:54:22.482853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.261 [2024-07-24 19:54:22.482872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.499132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.261 [2024-07-24 19:54:22.499168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.261 [2024-07-24 19:54:22.499188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.261 [2024-07-24 19:54:22.514151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.514182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:05.262 [2024-07-24 19:54:22.514218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.527207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.527252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.527274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.541344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.541393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.541413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.554167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.554203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.554223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.566727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.566757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.566774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.580282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.580331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.580347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.594639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.594684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.594702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.608515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.608546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6794 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.608577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.621223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.621265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.621300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.262 [2024-07-24 19:54:22.633383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.262 [2024-07-24 19:54:22.633420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.262 [2024-07-24 19:54:22.633457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.520 [2024-07-24 19:54:22.646499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.520 [2024-07-24 19:54:22.646548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.520 [2024-07-24 19:54:22.646568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.520 [2024-07-24 19:54:22.662536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f66cb0) 00:25:05.520 [2024-07-24 19:54:22.662572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.520 [2024-07-24 19:54:22.662592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:05.520 00:25:05.520 Latency(us) 00:25:05.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.520 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:05.520 nvme0n1 : 2.00 18057.47 70.54 0.00 0.00 7079.00 3592.34 19709.35 00:25:05.520 =================================================================================================================== 00:25:05.520 Total : 18057.47 70.54 0.00 0.00 7079.00 3592.34 19709.35 00:25:05.520 0 00:25:05.520 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:05.520 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:05.520 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:05.520 | .driver_specific 00:25:05.520 | .nvme_error 00:25:05.520 | .status_code 00:25:05.520 | .command_transient_transport_error' 00:25:05.520 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:05.778 19:54:22 
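The get_transient_errcount helper traced above is one RPC plus a jq filter: bdev_get_iostat reports per-bdev NVMe error counters (kept because bdev_nvme_set_options was called with --nvme-error-stat), and the filter extracts how many completions carried the transient transport error status that the injected digest failures produce. A minimal stand-alone sketch, reusing the rpc.py path and /var/tmp/bperf.sock socket from the trace (both specific to this CI workspace):

  # Read the per-bdev count of COMMAND TRANSIENT TRANSPORT ERROR (00/22)
  # completions, mirroring host/digest.sh's get_transient_errcount.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The (( 141 > 0 )) check that follows is the test's assertion: after two seconds of 4 KiB random reads with crc32c corruption injected, the bdev must have recorded at least one transient transport error (here it recorded 141).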
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 ))
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1277255
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' -z 1277255 ']'
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # kill -0 1277255
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # uname
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:25:05.778 19:54:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1277255
00:25:05.778 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:25:05.778 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']'
00:25:05.778 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1277255'
killing process with pid 1277255
00:25:05.778 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # kill 1277255
Received shutdown signal, test time was about 2.000000 seconds
00:25:05.778
00:25:05.778 Latency(us)
00:25:05.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:05.778 ===================================================================================================================
00:25:05.778 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:05.778 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # wait 1277255
00:25:06.038 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:06.038 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:06.038 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:25:06.038 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:06.038 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:06.038 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1277688
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1277688 /var/tmp/bperf.sock
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # '[' -z 1277688 ']'
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local max_retries=100
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:06.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # xtrace_disable
00:25:06.039 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:06.039 [2024-07-24 19:54:23.303724] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:25:06.039 [2024-07-24 19:54:23.303803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277688 ]
00:25:06.039 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:06.039 Zero copy mechanism will not be used.
00:25:06.039 EAL: No free 2048 kB hugepages reported on node 1
00:25:06.039 [2024-07-24 19:54:23.363699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:06.297 [2024-07-24 19:54:23.477188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:06.297 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:25:06.297 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@865 -- # return 0
00:25:06.297 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:06.297 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:06.555 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:06.555 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:06.555 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:06.555 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:06.555 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:06.555 19:54:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:07.125 nvme0n1
00:25:07.125 19:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:07.125 19:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:07.125 19:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:07.125 19:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:07.125 19:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
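Condensed, the setup traced above is four RPCs against the freshly started bdevperf instance, after which the queued job is kicked off. A sketch of the same sequence, with the addresses, NQN, and paths taken from the trace (they are specific to this test bed), not a general recipe:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bperf.sock
  # Keep NVMe error counters per bdev; retry failed I/O in the bdev layer indefinitely (-1).
  "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Start with error injection disabled so the attach itself succeeds.
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
  # Attach the NVMe-oF TCP controller with data digest (DDGST) enabled.
  "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt the crc32c results computed by the accel framework (the -i 32 argument
  # mirrors the trace), so received data PDUs fail data digest verification.
  "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Run the queued randread job (128 KiB I/O, queue depth 16, 2 seconds).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests

Because the host's own digest calculation is the corrupted one, every READ below fails its data digest check in nvme_tcp.c and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); with --nvme-error-stat set, each of those completions increments the counter that get_transient_errcount reads back.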
00:25:07.125 19:54:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:07.125 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:07.125 Zero copy mechanism will not be used.
00:25:07.125 Running I/O for 2 seconds...
00:25:07.125 [2024-07-24 19:54:24.415321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:07.125 [2024-07-24 19:54:24.415388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.125 [2024-07-24 19:54:24.415408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~79 similar reads elided: each fails with a data digest error on tqpair=(0x11b9290) and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), 19:54:24.421 through 19:54:24.967, cid/lba varying ...]
00:25:07.648 [2024-07-24 19:54:24.974478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:07.648 [2024-07-24 19:54:24.974506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:07.648 [2024-07-24 19:54:24.974523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:24.981402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:24.981431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:24.981447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:24.988390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:24.988427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:24.988444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:24.995478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:24.995520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:24.995536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:25.002527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:25.002558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:25.002575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:25.009649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:25.009694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:25.009712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:25.016554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:25.016588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:25.016607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.648 [2024-07-24 19:54:25.023508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.648 [2024-07-24 19:54:25.023538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.648 [2024-07-24 19:54:25.023554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.030624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.030656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.030675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.037683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.037725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.037743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.044826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.044860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.044877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.051956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.051995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.052014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.059229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.059272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.059294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.066293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.066323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.066339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.073390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.073421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.073437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.080404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.080435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.080450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.087401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.087432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.087448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.094457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.094486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.094502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.101581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.101613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.101631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.108674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.108708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.108733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.115813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.115846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.115864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.123068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.123110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 
[2024-07-24 19:54:25.123129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.130238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.130280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.130312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.137299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.137329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.137345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.144368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.144398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.906 [2024-07-24 19:54:25.144414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.906 [2024-07-24 19:54:25.151485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.906 [2024-07-24 19:54:25.151531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.151549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.158537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.158582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.165553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.165600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.165618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.172622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.172662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.172681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.179619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.179652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.179670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.186680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.186713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.186731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.193615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.193646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.193664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.200610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.200643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.200661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.207589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.207622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.207640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.214553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.214586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.214604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.221500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.221529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.221545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.228489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.228518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.228534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.235462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.235491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.235507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.242465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.242494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.242509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.249673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.249707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.249726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.256721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.256753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.256772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.263693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.263725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.263744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.270711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.270743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.270761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.277815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.277848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.277866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:07.907 [2024-07-24 19:54:25.284752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:07.907 [2024-07-24 19:54:25.284785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.907 [2024-07-24 19:54:25.284803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.291751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.291790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.291809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.298783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.298817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.298835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.305920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.305952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.305971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.313249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.313297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.313314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.322770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 
00:25:08.166 [2024-07-24 19:54:25.322804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.322822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.331173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.331207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.331227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.339524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.339575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.339594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.346836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.346880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.346900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.354236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.354299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.354316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.361358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.361393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.361409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.368663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.368704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.368723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.376173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.376211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.376229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.383783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.383819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.383838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.391616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.391656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.391674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.399487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.399532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.399549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.407370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.407404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.407422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.414633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.414670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.414690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.421900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.421935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.421963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.429166] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.429200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.429219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.166 [2024-07-24 19:54:25.436352] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.166 [2024-07-24 19:54:25.436382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.166 [2024-07-24 19:54:25.436398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.444286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.444336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.444353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.449378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.449410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.449426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.455578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.455613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.455632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.463391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.463422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.463455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.470599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.470632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.470651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:25:08.167 [2024-07-24 19:54:25.477963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.477998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.478017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.485028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.485075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.485094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.492605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.492640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.492659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.500331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.500374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.500389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.508054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.508088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.508107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.515632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.515668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.515687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.522922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.522957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.522976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.530398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.530429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.530445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.537045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.537079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.537097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.167 [2024-07-24 19:54:25.543721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.167 [2024-07-24 19:54:25.543755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.167 [2024-07-24 19:54:25.543773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.550583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.550631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.550649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.554462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.554490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.554506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.561459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.561488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.561512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.568435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.568462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.568502] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.575471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.575499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.575514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.582483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.582512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.582542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.589417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.589446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.589480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.596433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.596461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.596476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.603466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.603492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.603527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.610474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.610504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.610520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.617551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.617596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.426 [2024-07-24 19:54:25.617614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.624537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.624583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.624601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.631553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.631581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.631612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.638597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.638629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.638647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.645612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.645644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.645662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.652696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.652728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.652745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.660057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.660090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.660108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.666977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.667010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.667027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.673965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.673997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.674015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.680933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.680965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.680983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.688053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.688085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.688104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.695091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.695123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.426 [2024-07-24 19:54:25.695141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.426 [2024-07-24 19:54:25.702033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.426 [2024-07-24 19:54:25.702065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.702083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.709154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.709186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.709204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.716197] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.716229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.716256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.723301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.723329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.723368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.730310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.730338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.730369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.737311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.737340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.737356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.744302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.744330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.744360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.751423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.751451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.751483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.758345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.758374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.758406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.765389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 
00:25:08.427 [2024-07-24 19:54:25.765431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.765446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.772306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.772334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.772365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.779378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.779422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.779438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.786397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.786431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.786448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.793284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.793334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.793350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.427 [2024-07-24 19:54:25.800193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.427 [2024-07-24 19:54:25.800225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.427 [2024-07-24 19:54:25.800250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.685 [2024-07-24 19:54:25.807113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.685 [2024-07-24 19:54:25.807145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.685 [2024-07-24 19:54:25.807163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.685 [2024-07-24 19:54:25.814115] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.685 [2024-07-24 19:54:25.814149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.685 [2024-07-24 19:54:25.814169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.821134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.821166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.821184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.828192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.828223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.828249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.835256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.835307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.835321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.842230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.842269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.842288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.849195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.849227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.849253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.856334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.856381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.856396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.863331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.863360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.863376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.870403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.870446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.870461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.877503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.877546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.877562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.884703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.884736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.884754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.891724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.891757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.891775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.898688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.898720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.898738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.905757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.905788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.905812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.912615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.912648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.912665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.919628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.919660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.919678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.926573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.926619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.926637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.933646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.933676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.933694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.940403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.940430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.940462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.948846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.948880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.948899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.958428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.958473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.958490] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.967234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.967289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.967306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.976527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.976556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.976572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.985852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.985886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.985905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:25.995030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:25.995065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:25.995084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:26.004091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:26.004125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:26.004144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:26.013527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:26.013576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.686 [2024-07-24 19:54:26.013595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.686 [2024-07-24 19:54:26.023062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.686 [2024-07-24 19:54:26.023097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:08.687 [2024-07-24 19:54:26.023116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.687 [2024-07-24 19:54:26.032224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.687 [2024-07-24 19:54:26.032268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.687 [2024-07-24 19:54:26.032301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.687 [2024-07-24 19:54:26.037430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.687 [2024-07-24 19:54:26.037460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.687 [2024-07-24 19:54:26.037476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.687 [2024-07-24 19:54:26.046832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.687 [2024-07-24 19:54:26.046866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.687 [2024-07-24 19:54:26.046890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.687 [2024-07-24 19:54:26.056256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.687 [2024-07-24 19:54:26.056289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.687 [2024-07-24 19:54:26.056322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.687 [2024-07-24 19:54:26.063852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.687 [2024-07-24 19:54:26.063885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.687 [2024-07-24 19:54:26.063904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.945 [2024-07-24 19:54:26.070669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.945 [2024-07-24 19:54:26.070701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.945 [2024-07-24 19:54:26.070719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.945 [2024-07-24 19:54:26.077684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.945 [2024-07-24 19:54:26.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.945 [2024-07-24 19:54:26.077735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.084497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.084541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.084556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.091569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.091601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.091619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.098620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.098652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.098669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.105572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.105600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.105633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.112649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.112689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.112708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.119844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.119877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.119895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.126875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.126907] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.126925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.133746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.133778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.133796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.140823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.140855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.140873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.147650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.147683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.147701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.154613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.154645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.154662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.161542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.161591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.161608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.168626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.168660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.168678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.175654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.175687] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.175706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.182578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.182611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.182629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.189599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.189631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.189648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.196513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.196542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.196558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.203022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.203055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.203073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.209927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.209960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.209978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.217501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.217532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.217552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.225796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 
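The cid and lba values vary from entry to entry because the crc32c corruption is injected at the accel layer and hits whichever in-flight reads happen to complete next; the sqhd field in each completion simply tracks the submission queue head as it advances. The spread of affected commands can be pulled straight out of the saved log (placeholder file name again):

awk '/READ sqid:1/ {for (i = 1; i <= NF; i++) if ($i ~ /^(cid|lba):/) printf "%s ", $i; print ""}' bperf-console.log | sort | uniq -c | sort -rn | head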
00:25:08.946 [2024-07-24 19:54:26.225831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.225851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.233519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.233568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.233593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.240873] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.240907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.240925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.247941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.247982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.248001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.254984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.255019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.255037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.262022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.262054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.946 [2024-07-24 19:54:26.262072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.946 [2024-07-24 19:54:26.268965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.946 [2024-07-24 19:54:26.268998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.269016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.276284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.276332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.276348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.283340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.283370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.283386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.290387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.290426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.290443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.297425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.297458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.297475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.304490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.304519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.304535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.311532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.311562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.311578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:08.947 [2024-07-24 19:54:26.318600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:08.947 [2024-07-24 19:54:26.318632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:08.947 [2024-07-24 19:54:26.318650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.325601] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.325633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.325651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.332571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.332618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.332636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.339472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.339500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.339517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.346686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.346719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.346737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.353808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.353840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.353859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.360916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.360948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.360965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:09.232 [2024-07-24 19:54:26.368225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290) 00:25:09.232 [2024-07-24 19:54:26.368266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:09.232 [2024-07-24 19:54:26.368303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0
00:25:09.232 [2024-07-24 19:54:26.375217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:09.233 [2024-07-24 19:54:26.375256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.233 [2024-07-24 19:54:26.375275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:09.233 [2024-07-24 19:54:26.382283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:09.233 [2024-07-24 19:54:26.382312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.233 [2024-07-24 19:54:26.382328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:09.233 [2024-07-24 19:54:26.389386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:09.233 [2024-07-24 19:54:26.389415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.233 [2024-07-24 19:54:26.389431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:09.233 [2024-07-24 19:54:26.396419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:09.233 [2024-07-24 19:54:26.396448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.233 [2024-07-24 19:54:26.396464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:25:09.233 [2024-07-24 19:54:26.403559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x11b9290)
00:25:09.233 [2024-07-24 19:54:26.403607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:09.233 [2024-07-24 19:54:26.403624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:09.233
00:25:09.233 Latency(us)
00:25:09.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:09.233 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:25:09.233 nvme0n1 : 2.00 4350.15 543.77 0.00 0.00 3673.57 885.95 10777.03
00:25:09.233 ===================================================================================================================
00:25:09.233 Total : 4350.15 543.77 0.00 0.00 3673.57 885.95 10777.03
00:25:09.233 0
00:25:09.233 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:09.233 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:09.233 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
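The counter checked here comes from the bdev layer's per-NVMe-status error statistics (enabled by the bdev_nvme_set_options --nvme-error-stat call visible in the setup trace further down): bdev_get_iostat returns per-bdev JSON, and the jq filter on the following lines digs out the command_transient_transport_error counter, which the test requires to be non-zero (280 digest-induced failures in this pass). A sketch of the same query run by hand against the bperf socket shown above, using the SPDK checkout path from this workspace:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'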
00:25:09.233 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:09.233 | .driver_specific
00:25:09.233 | .nvme_error
00:25:09.233 | .status_code
00:25:09.233 | .command_transient_transport_error'
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 280 > 0 ))
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1277688
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' -z 1277688 ']'
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # kill -0 1277688
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # uname
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1277688
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']'
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1277688'
00:25:09.491 killing process with pid 1277688
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # kill 1277688
Received shutdown signal, test time was about 2.000000 seconds
00:25:09.491
00:25:09.491 Latency(us)
00:25:09.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:09.491 ===================================================================================================================
00:25:09.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:09.491 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # wait 1277688
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1278098
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1278098 /var/tmp/bperf.sock
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # '[' -z 1278098 ']'
00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:09.749 19:54:26
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local max_retries=100 00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # xtrace_disable 00:25:09.749 19:54:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.749 [2024-07-24 19:54:27.020820] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:25:09.749 [2024-07-24 19:54:27.020898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278098 ] 00:25:09.749 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.749 [2024-07-24 19:54:27.078590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.007 [2024-07-24 19:54:27.187502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.007 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:25:10.007 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@865 -- # return 0 00:25:10.007 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:10.007 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:10.264 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:10.264 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:10.264 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.264 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:10.264 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.264 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.830 nvme0n1 00:25:10.830 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:10.830 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:10.830 19:54:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.830 19:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 
== 0 ]]
00:25:10.830 19:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:10.830 19:54:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:10.830 Running I/O for 2 seconds...
00:25:10.830 [2024-07-24 19:54:28.140317] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ee5c8
00:25:10.830 [2024-07-24 19:54:28.141436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:10.830 [2024-07-24 19:54:28.141476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:10.830 [2024-07-24 19:54:28.152571] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fac10
00:25:10.830 [2024-07-24 19:54:28.153604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:10.831 [2024-07-24 19:54:28.153646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:10.831 [2024-07-24 19:54:28.165975] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eaef0
00:25:10.831 [2024-07-24 19:54:28.167168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:10.831 [2024-07-24 19:54:28.167201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:10.831 [2024-07-24 19:54:28.179404] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1b48
00:25:10.831 [2024-07-24 19:54:28.180772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:10.831 [2024-07-24 19:54:28.180804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:10.831 [2024-07-24 19:54:28.192767] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e84c0
00:25:10.831 [2024-07-24 19:54:28.194277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:10.831 [2024-07-24 19:54:28.194322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:10.831 [2024-07-24 19:54:28.206174] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fac10
00:25:10.831 [2024-07-24 19:54:28.207887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:10.831 [2024-07-24 19:54:28.207919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.219678] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f1430
00:25:11.089 [2024-07-24 19:54:28.221550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.221582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.233110] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3498
00:25:11.089 [2024-07-24 19:54:28.235135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.235166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.242199] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190df988
00:25:11.089 [2024-07-24 19:54:28.243044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.243074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.254233] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f35f0
00:25:11.089 [2024-07-24 19:54:28.255070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.255101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.267535] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fd208
00:25:11.089 [2024-07-24 19:54:28.268562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.268588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.280918] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e23b8
00:25:11.089 [2024-07-24 19:54:28.282111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.282142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.294267] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eb760
00:25:11.089 [2024-07-24 19:54:28.295622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.295652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.306202] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e8088
00:25:11.089 [2024-07-24 19:54:28.307022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.307052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.319053] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ddc00
00:25:11.089 [2024-07-24 19:54:28.319752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.319784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.333717] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fda78
00:25:11.089 [2024-07-24 19:54:28.335435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.335462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.347151] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f3e60
00:25:11.089 [2024-07-24 19:54:28.348991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.349022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.360521] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1f80
00:25:11.089 [2024-07-24 19:54:28.362554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.362581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.369549] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f6890
00:25:11.089 [2024-07-24 19:54:28.370414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.370444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.381672] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fe720
00:25:11.089 [2024-07-24 19:54:28.382514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.382545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.394961] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fb480
00:25:11.089 [2024-07-24 19:54:28.395987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.089 [2024-07-24 19:54:28.396018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.089 [2024-07-24 19:54:28.408407] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3060
00:25:11.090 [2024-07-24 19:54:28.409591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.090 [2024-07-24 19:54:28.409633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.090 [2024-07-24 19:54:28.422654] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e01f8
00:25:11.090 [2024-07-24 19:54:28.424001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.090 [2024-07-24 19:54:28.424034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:11.090 [2024-07-24 19:54:28.435861] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fbcf0
00:25:11.090 [2024-07-24 19:54:28.437426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:2298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.090 [2024-07-24 19:54:28.437454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:11.090 [2024-07-24 19:54:28.448019] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e8d30
00:25:11.090 [2024-07-24 19:54:28.449547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.090 [2024-07-24 19:54:28.449589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.090 [2024-07-24 19:54:28.461462] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f4b08
00:25:11.090 [2024-07-24 19:54:28.463167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.090 [2024-07-24 19:54:28.463198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.474839] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f0bc0
00:25:11.348 [2024-07-24 19:54:28.476724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.476755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.486687] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3498
00:25:11.348 [2024-07-24 19:54:28.488035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.488065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.498349] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fd640
00:25:11.348 [2024-07-24 19:54:28.500183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.500214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.509426] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f20d8
00:25:11.348 [2024-07-24 19:54:28.510298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.510326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.522957] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e7818
00:25:11.348 [2024-07-24 19:54:28.523981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.524014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.536344] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e2c28
00:25:11.348 [2024-07-24 19:54:28.537531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.537559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.550567] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f1868
00:25:11.348 [2024-07-24 19:54:28.551950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.551981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.563706] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5ec8
00:25:11.348 [2024-07-24 19:54:28.565219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.565258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.575727] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dece0
00:25:11.348 [2024-07-24 19:54:28.577230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.577269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.589087] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e6b70
00:25:11.348 [2024-07-24 19:54:28.590751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.590783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.602374] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eff18
00:25:11.348 [2024-07-24 19:54:28.604206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.604252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.615711] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e27f0
00:25:11.348 [2024-07-24 19:54:28.617743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.617774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.624718] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e23b8
00:25:11.348 [2024-07-24 19:54:28.625561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.625591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.638178] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fa3a0
00:25:11.348 [2024-07-24 19:54:28.639189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.639220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.650256] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190de8a8
00:25:11.348 [2024-07-24 19:54:28.651302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.651328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.663708] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1b48
00:25:11.348 [2024-07-24 19:54:28.664887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.348 [2024-07-24 19:54:28.664917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.348 [2024-07-24 19:54:28.677052] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e0a68
00:25:11.348 [2024-07-24 19:54:28.678455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.349 [2024-07-24 19:54:28.678481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:11.349 [2024-07-24 19:54:28.690461] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eb760
00:25:11.349 [2024-07-24 19:54:28.691959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.349 [2024-07-24 19:54:28.691989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.349 [2024-07-24 19:54:28.703802] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190de8a8
00:25:11.349 [2024-07-24 19:54:28.705509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.349 [2024-07-24 19:54:28.705540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.349 [2024-07-24 19:54:28.717111] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f8618
00:25:11.349 [2024-07-24 19:54:28.718956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.349 [2024-07-24 19:54:28.718986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.730483] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f7970
00:25:11.607 [2024-07-24 19:54:28.732612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.732643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.739464] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fac10
00:25:11.607 [2024-07-24 19:54:28.740320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.740350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.751465] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dfdc0
00:25:11.607 [2024-07-24 19:54:28.752307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.752337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.764768] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5ec8
00:25:11.607 [2024-07-24 19:54:28.765781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.765811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.778068] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e2c28
00:25:11.607 [2024-07-24 19:54:28.779258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.779289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.791491] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f1868
00:25:11.607 [2024-07-24 19:54:28.792850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.792881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.804740] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dece0
00:25:11.607 [2024-07-24 19:54:28.806269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.806300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.817999] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5ec8
00:25:11.607 [2024-07-24 19:54:28.819684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.819715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.831324] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e9e10
00:25:11.607 [2024-07-24 19:54:28.833141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.833172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.844561] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3060
00:25:11.607 [2024-07-24 19:54:28.846484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.846512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.853335] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eea00
00:25:11.607 [2024-07-24 19:54:28.854189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.854219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.866778] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f20d8
00:25:11.607 [2024-07-24 19:54:28.867801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.867831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.878938] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e38d0
00:25:11.607 [2024-07-24 19:54:28.879969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.880000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.892404] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f8618
00:25:11.607 [2024-07-24 19:54:28.893573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.893603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.905731] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e2c28
00:25:11.607 [2024-07-24 19:54:28.907078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.907109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.919105] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eb760
00:25:11.607 [2024-07-24 19:54:28.920675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.920706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.932455] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e38d0
00:25:11.607 [2024-07-24 19:54:28.934152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.934188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.945807] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e6b70
00:25:11.607 [2024-07-24 19:54:28.947674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.947705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.959164] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1710
00:25:11.607 [2024-07-24 19:54:28.961206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.961236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:25:11.607 [2024-07-24 19:54:28.968225] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e23b8
00:25:11.607 [2024-07-24 19:54:28.969091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.607 [2024-07-24 19:54:28.969121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.608 [2024-07-24 19:54:28.980328] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e0ea0
00:25:11.608 [2024-07-24 19:54:28.981155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.608 [2024-07-24 19:54:28.981185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:28.993802] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fc998
00:25:11.866 [2024-07-24 19:54:28.994813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:28.994844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.007114] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3498
00:25:11.866 [2024-07-24 19:54:29.008309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.008351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.020405] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fe720
00:25:11.866 [2024-07-24 19:54:29.021749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.021781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.032324] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dece0
00:25:11.866 [2024-07-24 19:54:29.033163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.033193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.043981] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f9f68
00:25:11.866 [2024-07-24 19:54:29.044813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.044844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.057375] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eff18
00:25:11.866 [2024-07-24 19:54:29.058433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.058459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.070712] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f96f8
00:25:11.866 [2024-07-24 19:54:29.071867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.071898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.084059] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ee190
00:25:11.866 [2024-07-24 19:54:29.085444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:8600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.085471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.096205] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f0350
00:25:11.866 [2024-07-24 19:54:29.097018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.097048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.109087] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fe2e8
00:25:11.866 [2024-07-24 19:54:29.109771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.109802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.122525] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e84c0
00:25:11.866 [2024-07-24 19:54:29.123421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.123449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.135885] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f9f68
00:25:11.866 [2024-07-24 19:54:29.136899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.136930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.147894] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3498
00:25:11.866 [2024-07-24 19:54:29.149774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.149805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.158917] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ed4e8
00:25:11.866 [2024-07-24 19:54:29.159761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:10595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.159791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.173152] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5220
00:25:11.866 [2024-07-24 19:54:29.174190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.866 [2024-07-24 19:54:29.174221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:11.866 [2024-07-24 19:54:29.186327] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190de038
00:25:11.866 [2024-07-24 19:54:29.187515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.867 [2024-07-24 19:54:29.187558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:11.867 [2024-07-24 19:54:29.198408] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fa3a0
00:25:11.867 [2024-07-24 19:54:29.199595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.867 [2024-07-24 19:54:29.199626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:25:11.867 [2024-07-24 19:54:29.211781] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1f80
00:25:11.867 [2024-07-24 19:54:29.213147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.867 [2024-07-24 19:54:29.213177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:25:11.867 [2024-07-24 19:54:29.225151] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f0ff8
00:25:11.867 [2024-07-24 19:54:29.226639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.867 [2024-07-24 19:54:29.226670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:25:11.867 [2024-07-24 19:54:29.238503] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f9f68
00:25:11.867 [2024-07-24 19:54:29.240196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:11.867 [2024-07-24 19:54:29.240226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.251948] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f20d8
00:25:12.125 [2024-07-24 19:54:29.253829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.253859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.263868] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e88f8
00:25:12.125 [2024-07-24 19:54:29.265184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:23764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.265220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.275373] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f7da8
00:25:12.125 [2024-07-24 19:54:29.277135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.277165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.287174] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f46d0
00:25:12.125 [2024-07-24 19:54:29.287995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:11834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.288025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.300362] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f20d8
00:25:12.125 [2024-07-24 19:54:29.301428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.301454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.314924] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dfdc0
00:25:12.125 [2024-07-24 19:54:29.316595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.316626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.328266] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fac10
00:25:12.125 [2024-07-24 19:54:29.330118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.330149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.341571] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f7100
00:25:12.125 [2024-07-24 19:54:29.343381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.343409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.350426] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f4f40
00:25:12.125 [2024-07-24 19:54:29.351266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.351307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.363834] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fac10
00:25:12.125 [2024-07-24 19:54:29.364813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.364843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.375935] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5220
00:25:12.125 [2024-07-24 19:54:29.376922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.376952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.389273] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eb328
00:25:12.125 [2024-07-24 19:54:29.390449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.125 [2024-07-24 19:54:29.390475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:12.125 [2024-07-24 19:54:29.402656] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f3e60
00:25:12.125 [2024-07-24 19:54:29.403958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.403989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.416053] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e2c28
00:25:12.126 [2024-07-24 19:54:29.417590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.417621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.429442] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e6300
00:25:12.126 [2024-07-24 19:54:29.431127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.431158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.442865] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ee5c8
00:25:12.126 [2024-07-24 19:54:29.444702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.444732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.456196] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f7538
00:25:12.126 [2024-07-24 19:54:29.458201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.458232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.465310] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fb480
00:25:12.126 [2024-07-24 19:54:29.466114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.466144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.477343] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e9e10
00:25:12.126 [2024-07-24 19:54:29.478150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.478179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.490677] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e84c0
00:25:12.126 [2024-07-24 19:54:29.491705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.126 [2024-07-24 19:54:29.491736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:12.126 [2024-07-24 19:54:29.504108] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f0788
00:25:12.384 [2024-07-24 19:54:29.505296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:4487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.505323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.517466] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5ec8
00:25:12.384 [2024-07-24 19:54:29.518801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.518832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.530881] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ebb98
00:25:12.384 [2024-07-24 19:54:29.532439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.532467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.544310] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ee190
00:25:12.384 [2024-07-24 19:54:29.545959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.545989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.557555] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e7c50
00:25:12.384 [2024-07-24 19:54:29.559432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.559460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.570897] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f6020
00:25:12.384 [2024-07-24 19:54:29.572891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.572922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.580001] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190feb58
00:25:12.384 [2024-07-24 19:54:29.580824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.580855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.592141] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f20d8
00:25:12.384 [2024-07-24 19:54:29.592949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.592986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.605549] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5220
00:25:12.384 [2024-07-24 19:54:29.606542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.606576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.618925] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ed920
00:25:12.384 [2024-07-24 19:54:29.620075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.620106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.632256] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fac10
00:25:12.384 [2024-07-24 19:54:29.633574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.633601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.644266] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e2c28
00:25:12.384 [2024-07-24 19:54:29.645069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.384 [2024-07-24 19:54:29.645100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:25:12.384 [2024-07-24 19:54:29.657155] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ea680
00:25:12.384 [2024-07-24 19:54:29.657819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.657850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.671801] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e23b8
00:25:12.385 [2024-07-24 19:54:29.673486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.673514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.685306] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ef6a8
00:25:12.385 [2024-07-24 19:54:29.687160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.687191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.698733] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f6458
00:25:12.385 [2024-07-24 19:54:29.700783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:6832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.700824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.707812] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eb328
00:25:12.385 [2024-07-24 19:54:29.708626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:11813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.708657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.721305] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ef6a8
00:25:12.385 [2024-07-24 19:54:29.722336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.722364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.734691] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e84c0
00:25:12.385 [2024-07-24 19:54:29.735843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.735874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.749270] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f4f40
00:25:12.385 [2024-07-24 19:54:29.751077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.385 [2024-07-24 19:54:29.751109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:12.385 [2024-07-24 19:54:29.762608] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e4140
00:25:12.644 [2024-07-24 19:54:29.764645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.764676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.771614] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1f80
00:25:12.644 [2024-07-24 19:54:29.772447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.772474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.785003] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fc560
00:25:12.644 [2024-07-24 19:54:29.785998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.786029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.798046] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f6cc8
00:25:12.644 [2024-07-24 19:54:29.799040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.799071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.811235] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190df550
00:25:12.644 [2024-07-24 19:54:29.812443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.812470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.823385] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5ec8
00:25:12.644 [2024-07-24 19:54:29.824534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.824560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.836763] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fb8b8
00:25:12.644 [2024-07-24 19:54:29.838052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.838082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.850132] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f8e88
00:25:12.644 [2024-07-24 19:54:29.851602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.851629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.863569] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e7c50
00:25:12.644 [2024-07-24 19:54:29.865240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.865291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.876925] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fc128
00:25:12.644 [2024-07-24 19:54:29.878739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.878769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.890311] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f4b08
00:25:12.644 [2024-07-24 19:54:29.892309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.644 [2024-07-24 19:54:29.892337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:25:12.644 [2024-07-24 19:54:29.899455] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error
on tqpair=(0x1c98600) with pdu=0x2000190dfdc0 00:25:12.644 [2024-07-24 19:54:29.900273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.644 [2024-07-24 19:54:29.900314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:12.644 [2024-07-24 19:54:29.912850] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e2c28 00:25:12.644 [2024-07-24 19:54:29.913822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.644 [2024-07-24 19:54:29.913853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:12.644 [2024-07-24 19:54:29.924941] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190de038 00:25:12.644 [2024-07-24 19:54:29.925933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.644 [2024-07-24 19:54:29.925969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:12.644 [2024-07-24 19:54:29.938471] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fc560 00:25:12.645 [2024-07-24 19:54:29.939632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:29.939663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:12.645 [2024-07-24 19:54:29.951888] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e5ec8 00:25:12.645 [2024-07-24 19:54:29.953205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:29.953234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:12.645 [2024-07-24 19:54:29.965203] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190fe720 00:25:12.645 [2024-07-24 19:54:29.966694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:29.966725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:12.645 [2024-07-24 19:54:29.978531] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dfdc0 00:25:12.645 [2024-07-24 19:54:29.980176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:29.980207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:12.645 [2024-07-24 19:54:29.991870] tcp.c:2207:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e3498 00:25:12.645 [2024-07-24 19:54:29.993659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:29.993690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:12.645 [2024-07-24 19:54:30.004809] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f46d0 00:25:12.645 [2024-07-24 19:54:30.006670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:30.006713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:12.645 [2024-07-24 19:54:30.013108] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ebfd0 00:25:12.645 [2024-07-24 19:54:30.013885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.645 [2024-07-24 19:54:30.013912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:12.903 [2024-07-24 19:54:30.030417] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ebfd0 00:25:12.903 [2024-07-24 19:54:30.032381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.903 [2024-07-24 19:54:30.032419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:12.903 [2024-07-24 19:54:30.041735] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f46d0 00:25:12.903 [2024-07-24 19:54:30.042694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.903 [2024-07-24 19:54:30.042728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:12.903 [2024-07-24 19:54:30.055325] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190dfdc0 00:25:12.903 [2024-07-24 19:54:30.056488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.903 [2024-07-24 19:54:30.056518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:12.903 [2024-07-24 19:54:30.068837] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190f0350 00:25:12.903 [2024-07-24 19:54:30.070113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.903 [2024-07-24 19:54:30.070144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:12.903 [2024-07-24 19:54:30.082292] 
tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190e1f80
[2024-07-24 19:54:30.083727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 19:54:30.083758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:12.903 [2024-07-24 19:54:30.095694] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190ef270
[2024-07-24 19:54:30.097428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 19:54:30.097456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:25:12.903 [2024-07-24 19:54:30.107151] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190df118
[2024-07-24 19:54:30.108120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 19:54:30.108152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:25:12.903 [2024-07-24 19:54:30.120358] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98600) with pdu=0x2000190eee38
[2024-07-24 19:54:30.121518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:20268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-24 19:54:30.121546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:25:12.903
00:25:12.903                                                  Latency(us)
00:25:12.903 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:12.903 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:12.903 nvme0n1                     :       2.00   19924.91      77.83       0.00     0.00    6416.74    3301.07   16602.45
00:25:12.903 ===================================================================================================================
00:25:12.903 Total                       :             19924.91      77.83       0.00     0.00    6416.74    3301.07   16602.45
00:25:12.903 0
00:25:12.903 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:12.903 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:12.903 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:12.903 | .driver_specific
00:25:12.903 | .nvme_error
00:25:12.903 | .status_code
00:25:12.903 | .command_transient_transport_error'
00:25:12.903 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 ))
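The check just above is the heart of the test: with --nvme-error-stat enabled, bdev_get_iostat reports per-status-code NVMe error counters, and the jq filter extracts how many commands completed as transient transport errors; this pass counted 156, so the (( 156 > 0 )) assertion holds. The throughput table is self-consistent too: 19924.91 IOPS x 4096 B / 1048576 = 77.83 MiB/s. A minimal standalone form of the same query, sketched under the assumption that an SPDK app is serving RPCs on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 as in this trace (rpc.py abbreviates spdk/scripts/rpc.py):

    # Sketch, not part of the log: count transient transport errors on nvme0n1.
    # The jq path mirrors the filter traced at host/digest.sh@28 above.
    errs=$(rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) && echo "digest corruption surfaced as ${errs} transient transport errors"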
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1278098
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' -z 1278098 ']'
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # kill -0 1278098
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # uname
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1278098
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']'
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1278098'
00:25:13.161 killing process with pid 1278098
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # kill 1278098
00:25:13.161 Received shutdown signal, test time was about 2.000000 seconds
00:25:13.161
00:25:13.161                                                  Latency(us)
00:25:13.161 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:13.161 ===================================================================================================================
00:25:13.161 Total                       :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:25:13.161 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # wait 1278098
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1278502
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1278502 /var/tmp/bperf.sock
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # '[' -z 1278502 ']'
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local max_retries=100
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:13.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
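waitforlisten (note max_retries=100 above) simply polls the RPC socket until the freshly forked bdevperf answers; the -z flag it was launched with keeps bdevperf idle after startup so the test can configure it over RPC before any I/O begins. A rough equivalent of that wait step, assuming the same socket path (rpc_get_methods is a standard SPDK RPC; rpc.py abbreviates spdk/scripts/rpc.py):

    # Sketch of the wait step: block until the bperf RPC socket responds.
    until rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2    # the real helper gives up after max_retries attempts
    done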
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@841 -- # xtrace_disable
00:25:13.419 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:13.419 [2024-07-24 19:54:30.717286] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:25:13.419 [2024-07-24 19:54:30.717362] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278502 ]
00:25:13.419 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:13.419 Zero copy mechanism will not be used.
00:25:13.419 EAL: No free 2048 kB hugepages reported on node 1
00:25:13.419 [2024-07-24 19:54:30.781129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:13.419 [2024-07-24 19:54:30.891722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:13.677 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:25:13.677 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@865 -- # return 0
00:25:13.677 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:13.677 19:54:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:13.934 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:13.934 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:13.934 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:13.934 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:13.934 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:13.934 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:14.501 nvme0n1
00:25:14.501 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:14.501 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:14.501 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:14.501 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:14.501 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:14.501 19:54:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:14.501 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:14.501 Zero copy mechanism will not be used. 00:25:14.501 Running I/O for 2 seconds... 00:25:14.501 [2024-07-24 19:54:31.780882] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.781320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.781371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.787520] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.787926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.787960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.794521] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.794864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.794894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.801312] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.801655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.801685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.808041] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.808370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.808399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.814662] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.814969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.815012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.821021] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 
19:54:31.821350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.821379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.827707] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.828029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.828057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.834576] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.834909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.834937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.841328] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.841643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.841671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.847929] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.848259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.848294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.854186] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.854529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.854566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.860337] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.860443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.860472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.866756] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with 
pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.867160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.867189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.501 [2024-07-24 19:54:31.873139] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.501 [2024-07-24 19:54:31.873440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-07-24 19:54:31.873469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.879918] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.880347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.880377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.886927] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.887324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.887353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.893791] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.894094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.894122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.900384] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.900700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.900729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.906529] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.906861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.906890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.913489] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.913837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.913866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.920189] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.920284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.920313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.926730] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.927069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.927097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.933586] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.759 [2024-07-24 19:54:31.933906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.759 [2024-07-24 19:54:31.933934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.759 [2024-07-24 19:54:31.940509] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.940837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.940865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.947579] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.947903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.947932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.954538] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.954912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.954942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.960954] 
tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.961259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.961300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.968011] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.968345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.968373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.974697] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.974991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.975018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.981843] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.982164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.982191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.989578] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.989873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.989900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:31.997560] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:31.997872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:31.997900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.005929] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.006266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.006298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
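For orientation amid the error burst: every entry here belongs to the randwrite/131072/16 pass configured at 19:54:30-31 (host/digest.sh@57 through @69). Stripped of the xtrace noise, with workspace paths shortened but flags exactly as logged, the driving sequence was:

    # Condensed from the trace above (paths abbreviated).
    bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
    bdevperf.py -s /var/tmp/bperf.sock perform_tests

The ordering is the point: crc32c injection stays disabled while the controller attaches so the connect itself is clean, --ddgst enables NVMe/TCP data digests on the new controller, and corruption is re-armed (arguments exactly as logged) only after that, so every digest mismatch that follows is deliberate.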
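Each injected failure then prints as the three-line unit repeated throughout: tcp.c flags the digest mismatch on a PDU, nvme_qpair.c prints the WRITE that carried it, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer retries (the --bdev-retry-count -1 set above makes retries unbounded) while the per-status counter climbs. A quick tally over a saved copy of this console output (log file name hypothetical):

    # One match per injected error; should track the count that
    # get_transient_errcount asserts on at the end of the pass.
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf_console.log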
00:25:14.760 [2024-07-24 19:54:32.013377] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.013702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.013730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.020061] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.020444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.020486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.027105] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.027455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.027485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.033433] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.033759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.033788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.040795] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.041098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.041127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.048857] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.049159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.049188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.057026] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.057343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.057372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.064214] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.064383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.064411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.072366] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.072680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.072708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.078859] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.079195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.079224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.085412] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.085725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.085754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.092163] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.092471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.092500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.098841] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.099142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.099171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.105200] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.105519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.105547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.112389] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.112707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.112735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.119767] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.120099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.120142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.127906] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.760 [2024-07-24 19:54:32.128220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.760 [2024-07-24 19:54:32.128257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:14.760 [2024-07-24 19:54:32.136330] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:14.761 [2024-07-24 19:54:32.136651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.761 [2024-07-24 19:54:32.136695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.143522] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.143853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.143883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.151723] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.152033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.152067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.159903] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.160292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.160335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.168058] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.168396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.168441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.175042] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.175380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.175409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.181759] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.182072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.182113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.188851] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.189174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.189202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.195353] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.195691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.195719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.202513] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.202845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.202888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.210007] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.210347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 
[2024-07-24 19:54:32.210376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.217427] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.217744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.217772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.225085] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.225422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.225451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.232673] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.233001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.233030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.240592] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.240912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.240941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.248730] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.249038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.249066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.257150] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.257461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.017 [2024-07-24 19:54:32.257490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:15.017 [2024-07-24 19:54:32.265495] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:15.017 [2024-07-24 19:54:32.265820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.017 [2024-07-24 19:54:32.265848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:15.017 [2024-07-24 19:54:32.273703] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90
00:25:15.017 [2024-07-24 19:54:32.274027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:15.017 [2024-07-24 19:54:32.274054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-entry pattern repeats continuously for every subsequent WRITE on this queue pair, from 19:54:32.282 through 19:54:33.178 (elapsed 00:25:15.018 to 00:25:16.049): a tcp.c:2207:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90", the WRITE command print (len:32, lba varying), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 ...]
00:25:16.049 [2024-07-24 19:54:33.184389] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90
00:25:16.049 [2024-07-24 19:54:33.184681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:16.049 [2024-07-24 19:54:33.184711] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.191713] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.192111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.192140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.198996] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.199375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.199404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.206565] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.206926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.206955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.212853] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.213126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.213155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.219149] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.219427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.219457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.225512] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.225800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.225828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.232633] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.233021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 
[2024-07-24 19:54:33.233049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.239755] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.240027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.240055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.246331] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.246610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.246639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.252482] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.252768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.252796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.258616] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.258889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.258940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.264765] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.265038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.049 [2024-07-24 19:54:33.265067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.049 [2024-07-24 19:54:33.271043] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.049 [2024-07-24 19:54:33.271319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.271347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.277483] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.277779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.277808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.283639] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.283942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.283970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.289837] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.290169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.290197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.296205] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.296541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.296570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.302781] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.303051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.303079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.308393] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.308695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.308724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.314962] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.315291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.315319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.321729] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.322123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.322152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.329064] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.329515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.329542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.336092] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.336373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.336403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.341974] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.342254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.342283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.348421] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.348706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.348735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.354655] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.354968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.360295] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.360568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.360596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.366606] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.366879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.366908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.372865] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.373178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.373206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.379339] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.379611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.379655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.385874] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.386173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.386202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.392413] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.392684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.392713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.398712] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.399012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.399040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.405061] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 [2024-07-24 19:54:33.405342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.405371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.411376] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.050 
[2024-07-24 19:54:33.411678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.050 [2024-07-24 19:54:33.411706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.050 [2024-07-24 19:54:33.417873] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.051 [2024-07-24 19:54:33.418147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.051 [2024-07-24 19:54:33.418176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.051 [2024-07-24 19:54:33.424139] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.051 [2024-07-24 19:54:33.424447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.051 [2024-07-24 19:54:33.424496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.430041] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.430353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.430382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.436066] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.436379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.436408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.441698] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.441970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.441999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.447621] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.447898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.447926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.453853] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) 
with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.454124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.454153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.460674] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.460950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.460979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.468797] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.469138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.469167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.477040] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.477481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.477510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.485268] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.485557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.485600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.493134] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.493515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.493559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.501517] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.501906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.501934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.509732] tcp.c:2207:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.510014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.510042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.517199] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.517481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.517509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.524816] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.525119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.525149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.532960] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.533253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.533281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.540686] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.541029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.541057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.548267] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.548622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.548655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.556172] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.556451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.556481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 
19:54:33.563538] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.563908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.563937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.569407] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.569679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.569708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.576058] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.576352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.576380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.581746] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.582035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.582063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.587651] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.587925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.587953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.593074] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.593365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.593393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.599545] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.599850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.599878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.605347] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.307 [2024-07-24 19:54:33.605645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.307 [2024-07-24 19:54:33.605672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.307 [2024-07-24 19:54:33.611435] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.611740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.611766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.618958] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.619311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.619354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.626022] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.626368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.626396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.634088] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.634430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.634459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.640557] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.640841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.640869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.646516] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.646803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.646846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.653022] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.653315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.653343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.659015] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.659339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.659367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.665053] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.665349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.665393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.671721] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.672005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.672033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.677358] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.677644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.677671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.308 [2024-07-24 19:54:33.683580] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.308 [2024-07-24 19:54:33.683853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.308 [2024-07-24 19:54:33.683881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.689389] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.689717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.689745] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.695124] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.695417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.695445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.701500] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.701810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.701839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.707992] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.708315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.708343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.713895] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.714208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.714249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.719747] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.720044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.720073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.725383] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.725656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.725685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.731288] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.731565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.731593] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.737825] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.738097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.738139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.744466] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.744738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.744766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.752157] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.565 [2024-07-24 19:54:33.752519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.565 [2024-07-24 19:54:33.752547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:16.565 [2024-07-24 19:54:33.759635] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.566 [2024-07-24 19:54:33.759948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.566 [2024-07-24 19:54:33.759992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:16.566 [2024-07-24 19:54:33.768017] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.566 [2024-07-24 19:54:33.768328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.566 [2024-07-24 19:54:33.768357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:16.566 [2024-07-24 19:54:33.775653] tcp.c:2207:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c98940) with pdu=0x2000190fef90 00:25:16.566 [2024-07-24 19:54:33.776015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.566 [2024-07-24 19:54:33.776044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:16.566 00:25:16.566 Latency(us) 00:25:16.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.566 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:16.566 nvme0n1 : 2.00 4585.93 573.24 0.00 0.00 3480.17 2439.40 9272.13 00:25:16.566 
00:25:16.566 ===================================================================================================================
00:25:16.566 Total : 4585.93 573.24 0.00 0.00 3480.17 2439.40 9272.13
00:25:16.566 0
00:25:16.566 19:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:16.566 19:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:16.566 19:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:16.566 19:54:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:16.566 | .driver_specific
00:25:16.566 | .nvme_error
00:25:16.566 | .status_code
00:25:16.566 | .command_transient_transport_error'
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 296 > 0 ))
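The digest errors above are the point of this test: with data digest enabled, the host verifies a CRC32C over each data PDU payload, a mismatch surfaces as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and the test then counts those completions through the bdevperf RPC socket, as the xtrace lines just above show. A minimal bash sketch of that counting step, assuming an SPDK application is serving RPCs on /var/tmp/bperf.sock (the helper name and jq filter mirror the get_transient_errcount trace above, not the actual host/digest.sh source):

    # Pull the transient transport error counter out of the NVMe bdev's
    # error statistics via the bdev_get_iostat RPC.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # The test passes only if at least one digest error was counted:
    (( $(get_transient_errcount nvme0n1) > 0 ))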
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1278502
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' -z 1278502 ']'
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # kill -0 1278502
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # uname
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1278502
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']'
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1278502'
00:25:16.823 killing process with pid 1278502
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # kill 1278502
00:25:16.823 Received shutdown signal, test time was about 2.000000 seconds
00:25:16.823
00:25:16.823 Latency(us)
00:25:16.823 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:16.823 ===================================================================================================================
00:25:16.823 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:16.823 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # wait 1278502
00:25:17.079 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1277136
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' -z 1277136 ']'
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # kill -0 1277136
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # uname
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1277136
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # process_name=reactor_0
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']'
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1277136'
00:25:17.080 killing process with pid 1277136
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # kill 1277136
00:25:17.080 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@975 -- # wait 1277136
00:25:17.337
00:25:17.337 real 0m15.622s
00:25:17.337 user 0m31.300s
00:25:17.337 sys 0m4.043s
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # xtrace_disable
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:25:17.337 ************************************
00:25:17.337 END TEST nvmf_digest_error
00:25:17.337 ************************************
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # nvmfcleanup
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:17.337 rmmod nvme_tcp
00:25:17.337 rmmod nvme_fabrics
00:25:17.337 rmmod nvme_keyring
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # '[' -n 1277136 ']'
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@494 -- # killprocess 1277136
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@951 -- # '[' -z 1277136 ']'
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@955 -- # kill -0 1277136
00:25:17.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1277136) - No such process
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@978 -- # echo 'Process with pid 1277136 is not found'
00:25:17.337 Process with pid 1277136 is not found
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' '' == iso ']'
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]]
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@500 -- # nvmf_tcp_fini
00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
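The second killprocess above finding "No such process" is expected: pid 1277136 (the nvmf target) was already taken down a few lines earlier, so the cleanup path in nvmftestfini only reports that the pid is gone. A rough bash sketch of that tolerant pattern, reconstructed from the xtrace output rather than copied from autotest_common.sh (the real helper also special-cases processes launched via sudo, as the reactor_0 = sudo comparison above hints):

    # Kill a test process by pid; an already-exited process is not an error.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
        kill "$pid"
        # wait only reaps children of this shell; the real helper runs in
        # that position, so the pid it killed is one of its own children.
        wait "$pid" 2>/dev/null || true
    }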
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.337 19:54:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:25:19.864 00:25:19.864 real 0m35.236s 00:25:19.864 user 1m2.630s 00:25:19.864 sys 0m9.628s 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:19.864 ************************************ 00:25:19.864 END TEST nvmf_digest 00:25:19.864 ************************************ 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.864 ************************************ 00:25:19.864 START TEST nvmf_bdevperf 00:25:19.864 ************************************ 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:19.864 * Looking for test storage... 
00:25:19.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # xtrace_disable 00:25:19.864 19:54:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # pci_devs=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -a pci_devs 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # pci_net_devs=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # pci_drivers=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -A pci_drivers 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@299 -- # net_devs=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@299 -- # local -ga net_devs 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@300 -- # e810=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@300 -- # local -ga e810 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # x722=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # local -ga x722 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # mlx=() 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # local -ga mlx 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.763 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:21.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:21.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.764 
19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # [[ up == up ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:21.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # [[ up == up ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:21.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # is_hw=yes 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.764 19:54:38 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:25:21.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:25:21.764 00:25:21.764 --- 10.0.0.2 ping statistics --- 00:25:21.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.764 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:25:21.764 00:25:21.764 --- 10.0.0.1 ping statistics --- 00:25:21.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.764 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@426 -- # return 0 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@725 -- # xtrace_disable 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # nvmfpid=1280966 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:21.764 
19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # waitforlisten 1280966 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@832 -- # '[' -z 1280966 ']' 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local max_retries=100 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@841 -- # xtrace_disable 00:25:21.764 19:54:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:21.764 [2024-07-24 19:54:38.951062] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:25:21.764 [2024-07-24 19:54:38.951170] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.764 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.764 [2024-07-24 19:54:39.018848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:21.764 [2024-07-24 19:54:39.141374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.764 [2024-07-24 19:54:39.141420] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.764 [2024-07-24 19:54:39.141434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.765 [2024-07-24 19:54:39.141446] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.765 [2024-07-24 19:54:39.141457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
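Stripped of the xtrace prefixes, the loopback topology and target launch traced above (nvmf/common.sh@248-@272 and @484) reduce to the commands below. Nothing here is new; it is the same sequence consolidated for readability, with the namespace, interface names, and binary path taken verbatim from this run (root required):

ip -4 addr flush cvl_0_0                            # start both E810 ports clean
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                        # the target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into that netns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps the peer port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator reachability
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &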
00:25:21.765 [2024-07-24 19:54:39.141509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.765 [2024-07-24 19:54:39.141567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.765 [2024-07-24 19:54:39.141570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@865 -- # return 0 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@731 -- # xtrace_disable 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 [2024-07-24 19:54:39.965945] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:22.698 19:54:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 Malloc0 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:22.698 [2024-07-24 19:54:40.023975] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@536 -- # config=() 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@536 -- # local subsystem config 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:25:22.698 { 00:25:22.698 "params": { 00:25:22.698 "name": "Nvme$subsystem", 00:25:22.698 "trtype": "$TEST_TRANSPORT", 00:25:22.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:22.698 "adrfam": "ipv4", 00:25:22.698 "trsvcid": "$NVMF_PORT", 00:25:22.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:22.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:22.698 "hdgst": ${hdgst:-false}, 00:25:22.698 "ddgst": ${ddgst:-false} 00:25:22.698 }, 00:25:22.698 "method": "bdev_nvme_attach_controller" 00:25:22.698 } 00:25:22.698 EOF 00:25:22.698 )") 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # cat 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # jq . 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@561 -- # IFS=, 00:25:22.698 19:54:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:25:22.698 "params": { 00:25:22.698 "name": "Nvme1", 00:25:22.698 "trtype": "tcp", 00:25:22.698 "traddr": "10.0.0.2", 00:25:22.698 "adrfam": "ipv4", 00:25:22.698 "trsvcid": "4420", 00:25:22.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:22.698 "hdgst": false, 00:25:22.698 "ddgst": false 00:25:22.698 }, 00:25:22.698 "method": "bdev_nvme_attach_controller" 00:25:22.698 }' 00:25:22.698 [2024-07-24 19:54:40.074190] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:25:22.698 [2024-07-24 19:54:40.074310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281126 ] 00:25:22.955 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.955 [2024-07-24 19:54:40.135989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.955 [2024-07-24 19:54:40.249739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.213 Running I/O for 1 seconds... 
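For reference, the target provisioning traced above (host/bdevperf.sh@17-@21) and this first bdevperf invocation condense to the sketch below. rpc_cmd is effectively the harness wrapper around scripts/rpc.py, so equivalent direct calls are shown; the outer "subsystems" wrapper is an assumption based on SPDK's standard JSON-config shape, since the trace only prints the inner params block verbatim:

# Target side: TCP transport, a 64 MB / 512 B-block malloc bdev, one subsystem,
# one namespace, one listener on the netns-side address.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf attaches over NVMe/TCP using a JSON config fed in on
# an anonymous fd (a heredoc here instead of the harness's /dev/fd/62).
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4096 -w verify -t 1 --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    } ]
  } ]
}
EOF
)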
00:25:24.587 
00:25:24.587 Latency(us) 
00:25:24.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:24.587 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:25:24.587 Verification LBA range: start 0x0 length 0x4000 
00:25:24.587 Nvme1n1 : 1.01 8548.78 33.39 0.00 0.00 14907.65 2888.44 15825.73 
00:25:24.587 =================================================================================================================== 
00:25:24.587 Total : 8548.78 33.39 0.00 0.00 14907.65 2888.44 15825.73 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1281268 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@536 -- # config=() 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@536 -- # local subsystem config 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 
00:25:24.587 { 
00:25:24.587 "params": { 
00:25:24.587 "name": "Nvme$subsystem", 
00:25:24.587 "trtype": "$TEST_TRANSPORT", 
00:25:24.587 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:24.587 "adrfam": "ipv4", 
00:25:24.587 "trsvcid": "$NVMF_PORT", 
00:25:24.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:25:24.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:25:24.587 "hdgst": ${hdgst:-false}, 
00:25:24.587 "ddgst": ${ddgst:-false} 
00:25:24.587 }, 
00:25:24.587 "method": "bdev_nvme_attach_controller" 
00:25:24.587 } 
00:25:24.587 EOF 
00:25:24.587 )") 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # cat 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # jq . 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@561 -- # IFS=, 
00:25:24.587 19:54:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # printf '%s\n' '{ 
00:25:24.587 "params": { 
00:25:24.587 "name": "Nvme1", 
00:25:24.587 "trtype": "tcp", 
00:25:24.587 "traddr": "10.0.0.2", 
00:25:24.587 "adrfam": "ipv4", 
00:25:24.587 "trsvcid": "4420", 
00:25:24.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:24.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:25:24.587 "hdgst": false, 
00:25:24.587 "ddgst": false 
00:25:24.587 }, 
00:25:24.587 "method": "bdev_nvme_attach_controller" 
00:25:24.587 }' 
00:25:24.587 [2024-07-24 19:54:41.869030] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:25:24.587 [2024-07-24 19:54:41.869109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281268 ] 
00:25:24.587 EAL: No free 2048 kB hugepages reported on node 1 
00:25:24.587 [2024-07-24 19:54:41.927818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 
00:25:24.844 [2024-07-24 19:54:42.038371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 
00:25:25.102 Running I/O for 15 seconds... 
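The 15-second run with -f exists so that the next step can take the target away mid-I/O. In script form, the sequence the trace performs next is roughly the following (pids and arguments verbatim from this run):

# host/bdevperf.sh@29-@35: background the long run, then hard-kill the target.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!          # 1281268 in this run
sleep 3                 # let the verify workload reach steady state
kill -9 1280966         # the nvmf_tgt pid; no graceful shutdown
sleep 3                 # give the initiator time to fail its in-flight I/O

With the target gone, every command still queued on the connection completes back to bdevperf with an abort status, which is what the flood of ABORTED - SQ DELETION notices below shows.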
00:25:27.628 19:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1280966 
00:25:27.628 19:54:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 
00:25:27.628 [2024-07-24 19:54:44.836198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:27.628 [2024-07-24 19:54:44.836257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same nvme_io_qpair_print_command / ABORTED - SQ DELETION (00/08) pair repeats for every command still outstanding on qpair 1 (READs covering lba 38944-39200 and WRITEs covering lba 39504-39952, cid varying per entry, all within 19:54:44.836-.839); the captured excerpt breaks off mid-entry at [2024-07-24 19:54:44.839229] ...]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.630 [2024-07-24 19:54:44.839252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.630 [2024-07-24 19:54:44.839269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.839976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.839992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.631 [2024-07-24 19:54:44.840357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.631 [2024-07-24 19:54:44.840370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.632 [2024-07-24 19:54:44.840385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.632 [2024-07-24 19:54:44.840398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.632 [2024-07-24 19:54:44.840412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x105a830 is same with the state(6) to be set 00:25:27.632 [2024-07-24 19:54:44.840428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:27.632 [2024-07-24 19:54:44.840439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:27.632 [2024-07-24 19:54:44.840451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39496 len:8 PRP1 0x0 PRP2 0x0 00:25:27.632 [2024-07-24 19:54:44.840463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.632 [2024-07-24 19:54:44.840523] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x105a830 was disconnected and freed. reset controller. 
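Every completion above carries the status pair (00/08): status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion - the expected outcome for I/O still in flight when its submission queue is torn down during a controller reset. As a minimal illustration (plain C against the spec's CQE layout, not SPDK code), the pair can be decoded from the upper 16 bits of completion dword 3:

#include <stdint.h>
#include <stdio.h>

/* Decode the (SCT/SC) pair that spdk_nvme_print_completion reports,
 * taking the upper 16 bits of CQE dword 3. Per the NVMe base spec:
 * bit 0 = phase tag, bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT), bit 14 = more, bit 15 = do-not-retry. */
static void decode_cqe_status(uint16_t sts)
{
    unsigned sc  = (sts >> 1) & 0xff;
    unsigned sct = (sts >> 9) & 0x07;
    unsigned m   = (sts >> 14) & 0x1;
    unsigned dnr = (sts >> 15) & 0x1;

    printf("(%02x/%02x) m:%u dnr:%u%s\n", sct, sc, m, dnr,
           (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
}

int main(void)
{
    /* The status every aborted WRITE/READ above completed with:
     * SCT 0x0 (generic), SC 0x08 (command aborted due to SQ deletion). */
    decode_cqe_status((0x0u << 9) | (0x08u << 1));
    return 0;
}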
00:25:27.632 [2024-07-24 19:54:44.844363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:27.632 [2024-07-24 19:54:44.844431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:27.632 [2024-07-24 19:54:44.845257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:27.632 [2024-07-24 19:54:44.845310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:27.632 [2024-07-24 19:54:44.845327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:27.632 [2024-07-24 19:54:44.845542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:27.632 [2024-07-24 19:54:44.845811] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:27.632 [2024-07-24 19:54:44.845842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:27.632 [2024-07-24 19:54:44.845860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:27.632 [2024-07-24 19:54:44.849478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:27.632 [2024-07-24 19:54:44.858630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:27.632 [2024-07-24 19:54:44.859045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:27.632 [2024-07-24 19:54:44.859077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:27.632 [2024-07-24 19:54:44.859095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:27.632 [2024-07-24 19:54:44.859346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:27.632 [2024-07-24 19:54:44.859589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:27.632 [2024-07-24 19:54:44.859611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:27.632 [2024-07-24 19:54:44.859626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:27.632 [2024-07-24 19:54:44.863196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
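Two errno values recur in every cycle above: errno 111 is ECONNREFUSED on Linux (nothing is accepting TCP connections at 10.0.0.2:4420 while the target-side controller is down), and errno 9 is EBADF (the flush runs against a socket descriptor already closed by the disconnect). A standalone POSIX probe, unrelated to the SPDK code paths, reproduces the first of these:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same endpoint the log shows: the NVMe-oF TCP listener at 10.0.0.2:4420. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        /* With no listener: "connect() failed, errno = 111 (Connection refused)" */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}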
[... 28 further reset cycles (19:54:44.872493 through 19:54:45.252025) elided; each repeats the identical sequence: resetting controller -> connect() failed, errno = 111 -> sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 -> Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor -> controller reinitialization failed -> Resetting controller failed. ...]
00:25:27.893 [2024-07-24 19:54:45.261345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.893 [2024-07-24 19:54:45.261762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.893 [2024-07-24 19:54:45.261789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:27.893 [2024-07-24 19:54:45.261804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:27.893 [2024-07-24 19:54:45.262038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:27.893 [2024-07-24 19:54:45.262292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:27.893 [2024-07-24 19:54:45.262315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:27.893 [2024-07-24 19:54:45.262330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.893 [2024-07-24 19:54:45.265896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.152 [2024-07-24 19:54:45.275204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.152 [2024-07-24 19:54:45.275596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.152 [2024-07-24 19:54:45.275627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.152 [2024-07-24 19:54:45.275645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.152 [2024-07-24 19:54:45.275882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.152 [2024-07-24 19:54:45.276124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.152 [2024-07-24 19:54:45.276146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.152 [2024-07-24 19:54:45.276161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.152 [2024-07-24 19:54:45.279743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.152 [2024-07-24 19:54:45.289055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.152 [2024-07-24 19:54:45.289446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.152 [2024-07-24 19:54:45.289476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.152 [2024-07-24 19:54:45.289494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.152 [2024-07-24 19:54:45.289732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.152 [2024-07-24 19:54:45.289974] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.152 [2024-07-24 19:54:45.289996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.152 [2024-07-24 19:54:45.290012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.152 [2024-07-24 19:54:45.293590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.152 [2024-07-24 19:54:45.303098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.152 [2024-07-24 19:54:45.303517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.152 [2024-07-24 19:54:45.303548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.152 [2024-07-24 19:54:45.303565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.303803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.304045] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.304067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.304082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.307662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.153 [2024-07-24 19:54:45.316961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.317326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.317358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.317376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.317621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.317863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.317886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.317900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.321479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.153 [2024-07-24 19:54:45.330991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.331411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.331442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.331459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.331697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.331940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.331963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.331978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.335558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.153 [2024-07-24 19:54:45.344942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.345344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.345374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.345392] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.345630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.345872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.345896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.345911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.349489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.153 [2024-07-24 19:54:45.358809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.359250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.359278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.359294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.359535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.359777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.359799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.359820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.363400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.153 [2024-07-24 19:54:45.372699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.373120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.373151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.373168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.373417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.373660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.373682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.373697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.377277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.153 [2024-07-24 19:54:45.386575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.386959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.386990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.387007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.387255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.387497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.387520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.387535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.391103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.153 [2024-07-24 19:54:45.400613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.400994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.401024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.401042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.401291] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.401534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.401557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.401572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.405141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.153 [2024-07-24 19:54:45.414463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.414845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.414882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.414901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.415138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.415391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.415415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.153 [2024-07-24 19:54:45.415429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.153 [2024-07-24 19:54:45.419016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.153 [2024-07-24 19:54:45.428330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.153 [2024-07-24 19:54:45.428719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.153 [2024-07-24 19:54:45.428745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.153 [2024-07-24 19:54:45.428760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.153 [2024-07-24 19:54:45.428983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.153 [2024-07-24 19:54:45.429225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.153 [2024-07-24 19:54:45.429258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.429275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.432847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.154 [2024-07-24 19:54:45.442377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.442758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.442789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.442807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.443044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.443297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.443320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.443335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.446906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.154 [2024-07-24 19:54:45.456230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.456617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.456647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.456665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.456903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.457150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.457174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.457189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.460776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.154 [2024-07-24 19:54:45.470088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.470465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.470496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.470513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.470751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.470993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.471016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.471031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.474616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.154 [2024-07-24 19:54:45.483945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.484353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.484386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.484404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.484642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.484884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.484907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.484922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.488508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.154 [2024-07-24 19:54:45.497865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.498278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.498317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.498334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.498564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.498818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.498841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.498856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.502447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.154 [2024-07-24 19:54:45.511780] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.512188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.512220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.512237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.512485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.512727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.512750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.512764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.516353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.154 [2024-07-24 19:54:45.525672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.154 [2024-07-24 19:54:45.526087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.154 [2024-07-24 19:54:45.526118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.154 [2024-07-24 19:54:45.526136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.154 [2024-07-24 19:54:45.526386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.154 [2024-07-24 19:54:45.526629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.154 [2024-07-24 19:54:45.526652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.154 [2024-07-24 19:54:45.526667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.154 [2024-07-24 19:54:45.530250] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.413 [2024-07-24 19:54:45.539579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.413 [2024-07-24 19:54:45.540058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.413 [2024-07-24 19:54:45.540095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.413 [2024-07-24 19:54:45.540113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.413 [2024-07-24 19:54:45.540360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.413 [2024-07-24 19:54:45.540603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.413 [2024-07-24 19:54:45.540626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.413 [2024-07-24 19:54:45.540641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.413 [2024-07-24 19:54:45.544219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.413 [2024-07-24 19:54:45.553547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.413 [2024-07-24 19:54:45.553976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.413 [2024-07-24 19:54:45.554026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.413 [2024-07-24 19:54:45.554051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.413 [2024-07-24 19:54:45.554302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.413 [2024-07-24 19:54:45.554555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.413 [2024-07-24 19:54:45.554579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.413 [2024-07-24 19:54:45.554594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.413 [2024-07-24 19:54:45.558170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.413 [2024-07-24 19:54:45.567499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.413 [2024-07-24 19:54:45.568014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.413 [2024-07-24 19:54:45.568069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.413 [2024-07-24 19:54:45.568086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.413 [2024-07-24 19:54:45.568335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.413 [2024-07-24 19:54:45.568578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.413 [2024-07-24 19:54:45.568601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.413 [2024-07-24 19:54:45.568615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.413 [2024-07-24 19:54:45.572190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.413 [2024-07-24 19:54:45.581517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.413 [2024-07-24 19:54:45.581969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.413 [2024-07-24 19:54:45.582023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.413 [2024-07-24 19:54:45.582040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.582289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.582542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.582565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.582580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.586163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.414 [2024-07-24 19:54:45.595490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.595966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.595997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.596015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.596262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.596505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.596543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.596559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.600140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.414 [2024-07-24 19:54:45.609466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.609947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.609978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.609995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.610233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.610486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.610509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.610523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.614097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.414 [2024-07-24 19:54:45.623413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.623922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.623976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.623993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.624231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.624484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.624507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.624522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.628096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.414 [2024-07-24 19:54:45.637406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.637867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.637919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.637937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.638175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.638428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.638451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.638466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.642036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.414 [2024-07-24 19:54:45.651352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.651865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.651916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.651933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.652171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.652423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.652456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.652471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.656068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.414 [2024-07-24 19:54:45.665378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.665788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.665819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.665837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.666075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.666329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.666353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.666369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.669939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.414 [2024-07-24 19:54:45.679251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.679662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.679693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.679710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.679948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.680189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.680211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.680226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.683807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.414 [2024-07-24 19:54:45.693131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.414 [2024-07-24 19:54:45.693558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.414 [2024-07-24 19:54:45.693590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.414 [2024-07-24 19:54:45.693608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.414 [2024-07-24 19:54:45.693852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.414 [2024-07-24 19:54:45.694094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.414 [2024-07-24 19:54:45.694117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.414 [2024-07-24 19:54:45.694132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.414 [2024-07-24 19:54:45.697712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.415 [2024-07-24 19:54:45.707019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.707435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.707467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.707484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.707722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.707964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.707987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.708001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.415 [2024-07-24 19:54:45.711583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.415 [2024-07-24 19:54:45.720882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.721267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.721297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.721315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.721553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.721794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.721817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.721832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.415 [2024-07-24 19:54:45.725414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.415 [2024-07-24 19:54:45.734922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.735330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.735360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.735378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.735616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.735857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.735879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.735900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.415 [2024-07-24 19:54:45.739483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.415 [2024-07-24 19:54:45.748781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.749159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.749190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.749207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.749454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.749697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.749719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.749734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.415 [2024-07-24 19:54:45.753315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.415 [2024-07-24 19:54:45.762629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.763045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.763076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.763094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.763342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.763585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.763607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.763622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.415 [2024-07-24 19:54:45.767191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.415 [2024-07-24 19:54:45.776497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.776882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.776914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.776932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.777170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.777427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.777452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.777467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.415 [2024-07-24 19:54:45.781036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.415 [2024-07-24 19:54:45.790338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.415 [2024-07-24 19:54:45.790744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.415 [2024-07-24 19:54:45.790779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.415 [2024-07-24 19:54:45.790797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.415 [2024-07-24 19:54:45.791034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.415 [2024-07-24 19:54:45.791289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.415 [2024-07-24 19:54:45.791313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.415 [2024-07-24 19:54:45.791328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.674 [2024-07-24 19:54:45.794898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.674 [2024-07-24 19:54:45.804193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.674 [2024-07-24 19:54:45.804584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.674 [2024-07-24 19:54:45.804616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.674 [2024-07-24 19:54:45.804633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.674 [2024-07-24 19:54:45.804871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.674 [2024-07-24 19:54:45.805113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.674 [2024-07-24 19:54:45.805136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.674 [2024-07-24 19:54:45.805151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.674 [2024-07-24 19:54:45.808731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.674 [2024-07-24 19:54:45.818029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.674 [2024-07-24 19:54:45.818442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.674 [2024-07-24 19:54:45.818473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.674 [2024-07-24 19:54:45.818491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.674 [2024-07-24 19:54:45.818729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.674 [2024-07-24 19:54:45.818970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.674 [2024-07-24 19:54:45.818993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.674 [2024-07-24 19:54:45.819007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.674 [2024-07-24 19:54:45.822590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.674 [2024-07-24 19:54:45.831890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.674 [2024-07-24 19:54:45.832300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.674 [2024-07-24 19:54:45.832332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.674 [2024-07-24 19:54:45.832350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.674 [2024-07-24 19:54:45.832588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.674 [2024-07-24 19:54:45.832836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.674 [2024-07-24 19:54:45.832859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.674 [2024-07-24 19:54:45.832874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.674 [2024-07-24 19:54:45.836456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.674 [2024-07-24 19:54:45.845760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.674 [2024-07-24 19:54:45.846168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.674 [2024-07-24 19:54:45.846199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.674 [2024-07-24 19:54:45.846216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.674 [2024-07-24 19:54:45.846464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.674 [2024-07-24 19:54:45.846707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.674 [2024-07-24 19:54:45.846730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.674 [2024-07-24 19:54:45.846745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.674 [2024-07-24 19:54:45.850322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.674 [2024-07-24 19:54:45.859811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.674 [2024-07-24 19:54:45.860224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.674 [2024-07-24 19:54:45.860263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.674 [2024-07-24 19:54:45.860292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.674 [2024-07-24 19:54:45.860531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.674 [2024-07-24 19:54:45.860772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.674 [2024-07-24 19:54:45.860794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.674 [2024-07-24 19:54:45.860810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.674 [2024-07-24 19:54:45.864387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.674 [2024-07-24 19:54:45.873684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.674 [2024-07-24 19:54:45.874071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.674 [2024-07-24 19:54:45.874102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.674 [2024-07-24 19:54:45.874120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.674 [2024-07-24 19:54:45.874370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.674 [2024-07-24 19:54:45.874612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.874635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.874650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.878233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.675 [2024-07-24 19:54:45.887550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.887940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.887971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.887989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.888228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.888481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.888504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.888518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.892089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.675 [2024-07-24 19:54:45.901393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.901776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.901807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.901825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.902063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.902315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.902339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.902354] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.905926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.675 [2024-07-24 19:54:45.915224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.915645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.915675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.915693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.915931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.916172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.916195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.916210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.919814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.675 [2024-07-24 19:54:45.929116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.929537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.929568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.929591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.929830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.930071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.930094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.930109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.933690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.675 [2024-07-24 19:54:45.942993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.943405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.943436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.943453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.943692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.943933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.943956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.943970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.947549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.675 [2024-07-24 19:54:45.956867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.957286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.957317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.957335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.957573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.957815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.957838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.957852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.961438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.675 [2024-07-24 19:54:45.970739] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.971158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.971188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.971205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.971453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.971696] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.971725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.971740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.975318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.675 [2024-07-24 19:54:45.984622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.985033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.985064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.985082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.985332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.985574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.985597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.985612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:45.989181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.675 [2024-07-24 19:54:45.998487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:45.998871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:45.998902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:45.998919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:45.999157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:45.999410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.675 [2024-07-24 19:54:45.999434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.675 [2024-07-24 19:54:45.999448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.675 [2024-07-24 19:54:46.003019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.675 [2024-07-24 19:54:46.012322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.675 [2024-07-24 19:54:46.012732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.675 [2024-07-24 19:54:46.012763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.675 [2024-07-24 19:54:46.012780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.675 [2024-07-24 19:54:46.013018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.675 [2024-07-24 19:54:46.013271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.676 [2024-07-24 19:54:46.013295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.676 [2024-07-24 19:54:46.013310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.676 [2024-07-24 19:54:46.016877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.676 [2024-07-24 19:54:46.026184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.676 [2024-07-24 19:54:46.026603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.676 [2024-07-24 19:54:46.026634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.676 [2024-07-24 19:54:46.026651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.676 [2024-07-24 19:54:46.026889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.676 [2024-07-24 19:54:46.027131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.676 [2024-07-24 19:54:46.027154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.676 [2024-07-24 19:54:46.027169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.676 [2024-07-24 19:54:46.030753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.676 [2024-07-24 19:54:46.040068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.676 [2024-07-24 19:54:46.040472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.676 [2024-07-24 19:54:46.040503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.676 [2024-07-24 19:54:46.040520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.676 [2024-07-24 19:54:46.040759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.676 [2024-07-24 19:54:46.041001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.676 [2024-07-24 19:54:46.041024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.676 [2024-07-24 19:54:46.041039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.676 [2024-07-24 19:54:46.044621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.935 [2024-07-24 19:54:46.053942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.935 [2024-07-24 19:54:46.054357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.935 [2024-07-24 19:54:46.054388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.935 [2024-07-24 19:54:46.054406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.935 [2024-07-24 19:54:46.054644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.935 [2024-07-24 19:54:46.054886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.935 [2024-07-24 19:54:46.054908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.935 [2024-07-24 19:54:46.054924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.935 [2024-07-24 19:54:46.058519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.935 [2024-07-24 19:54:46.067828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.935 [2024-07-24 19:54:46.068237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.935 [2024-07-24 19:54:46.068277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.935 [2024-07-24 19:54:46.068315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.935 [2024-07-24 19:54:46.068560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.935 [2024-07-24 19:54:46.068802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.935 [2024-07-24 19:54:46.068825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.935 [2024-07-24 19:54:46.068839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.935 [2024-07-24 19:54:46.072420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.935 [2024-07-24 19:54:46.081732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.935 [2024-07-24 19:54:46.082114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.935 [2024-07-24 19:54:46.082146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.935 [2024-07-24 19:54:46.082163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.935 [2024-07-24 19:54:46.082412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.935 [2024-07-24 19:54:46.082655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.935 [2024-07-24 19:54:46.082678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.935 [2024-07-24 19:54:46.082693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.935 [2024-07-24 19:54:46.086291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.935 [2024-07-24 19:54:46.095603] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.935 [2024-07-24 19:54:46.096011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.935 [2024-07-24 19:54:46.096048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.935 [2024-07-24 19:54:46.096065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.935 [2024-07-24 19:54:46.096315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.935 [2024-07-24 19:54:46.096558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.935 [2024-07-24 19:54:46.096580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.935 [2024-07-24 19:54:46.096596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.935 [2024-07-24 19:54:46.100164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.935 [2024-07-24 19:54:46.109483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.935 [2024-07-24 19:54:46.109894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.935 [2024-07-24 19:54:46.109943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.935 [2024-07-24 19:54:46.109961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.935 [2024-07-24 19:54:46.110199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.935 [2024-07-24 19:54:46.110450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.935 [2024-07-24 19:54:46.110474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.935 [2024-07-24 19:54:46.110494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.935 [2024-07-24 19:54:46.114068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.935 [2024-07-24 19:54:46.123387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.935 [2024-07-24 19:54:46.123772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.935 [2024-07-24 19:54:46.123821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.935 [2024-07-24 19:54:46.123840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.124078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.124330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.124359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.124374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.127946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.936 [2024-07-24 19:54:46.137275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.137714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.137761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.137778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.138016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.138270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.138294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.138308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.141881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.936 [2024-07-24 19:54:46.151182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.151575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.151606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.151623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.151860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.152103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.152126] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.152140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.155719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.936 [2024-07-24 19:54:46.165027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.165422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.165453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.165470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.165708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.165950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.165973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.165988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.169570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.936 [2024-07-24 19:54:46.178873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.179255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.179287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.179305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.179544] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.179786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.179809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.179824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.183414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.936 [2024-07-24 19:54:46.192919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.193329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.193361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.193379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.193618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.193860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.193883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.193898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.197478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.936 [2024-07-24 19:54:46.206772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.207180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.207210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.207228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.207474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.207723] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.207747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.207762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.211335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.936 [2024-07-24 19:54:46.220636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.221024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.221055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.221073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.221319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.221570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.221593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.221608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.225182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.936 [2024-07-24 19:54:46.234508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.234915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.234946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.234964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.235202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.235453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.235476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.235491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.239088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.936 [2024-07-24 19:54:46.248391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.248791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.248823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.248840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.249080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.249332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.936 [2024-07-24 19:54:46.249356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.936 [2024-07-24 19:54:46.249371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.936 [2024-07-24 19:54:46.252950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.936 [2024-07-24 19:54:46.262265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.936 [2024-07-24 19:54:46.262648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.936 [2024-07-24 19:54:46.262679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.936 [2024-07-24 19:54:46.262697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.936 [2024-07-24 19:54:46.262936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.936 [2024-07-24 19:54:46.263177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.937 [2024-07-24 19:54:46.263200] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.937 [2024-07-24 19:54:46.263215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.937 [2024-07-24 19:54:46.266797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.937 [2024-07-24 19:54:46.276315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.937 [2024-07-24 19:54:46.276714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.937 [2024-07-24 19:54:46.276745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.937 [2024-07-24 19:54:46.276763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.937 [2024-07-24 19:54:46.277001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.937 [2024-07-24 19:54:46.277251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.937 [2024-07-24 19:54:46.277275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.937 [2024-07-24 19:54:46.277290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.937 [2024-07-24 19:54:46.280859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:28.937 [2024-07-24 19:54:46.290169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.937 [2024-07-24 19:54:46.290593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.937 [2024-07-24 19:54:46.290624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.937 [2024-07-24 19:54:46.290641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.937 [2024-07-24 19:54:46.290879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.937 [2024-07-24 19:54:46.291121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.937 [2024-07-24 19:54:46.291146] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.937 [2024-07-24 19:54:46.291161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.937 [2024-07-24 19:54:46.294745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:28.937 [2024-07-24 19:54:46.304046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:28.937 [2024-07-24 19:54:46.304467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:28.937 [2024-07-24 19:54:46.304497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:28.937 [2024-07-24 19:54:46.304521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:28.937 [2024-07-24 19:54:46.304759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:28.937 [2024-07-24 19:54:46.305001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:28.937 [2024-07-24 19:54:46.305024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:28.937 [2024-07-24 19:54:46.305039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:28.937 [2024-07-24 19:54:46.308620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.197 [2024-07-24 19:54:46.317943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.318348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.318380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.318398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.318636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.318878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.318901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.197 [2024-07-24 19:54:46.318916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.197 [2024-07-24 19:54:46.322489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.197 [2024-07-24 19:54:46.331794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.332179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.332210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.332227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.332474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.332716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.332740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.197 [2024-07-24 19:54:46.332754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.197 [2024-07-24 19:54:46.336329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.197 [2024-07-24 19:54:46.345630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.346018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.346049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.346067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.346313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.346556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.346584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.197 [2024-07-24 19:54:46.346600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.197 [2024-07-24 19:54:46.350168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.197 [2024-07-24 19:54:46.359479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.359900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.359931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.359948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.360186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.360437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.360460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.197 [2024-07-24 19:54:46.360476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.197 [2024-07-24 19:54:46.364045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.197 [2024-07-24 19:54:46.373423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.373836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.373866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.373884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.374122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.374373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.374397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.197 [2024-07-24 19:54:46.374412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.197 [2024-07-24 19:54:46.377988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.197 [2024-07-24 19:54:46.387290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.387718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.387749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.387767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.388005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.388257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.388281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.197 [2024-07-24 19:54:46.388295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.197 [2024-07-24 19:54:46.391865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.197 [2024-07-24 19:54:46.401167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.197 [2024-07-24 19:54:46.401555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.197 [2024-07-24 19:54:46.401586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.197 [2024-07-24 19:54:46.401604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.197 [2024-07-24 19:54:46.401842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.197 [2024-07-24 19:54:46.402083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.197 [2024-07-24 19:54:46.402106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.198 [2024-07-24 19:54:46.402121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.198 [2024-07-24 19:54:46.405700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.198 [2024-07-24 19:54:46.415204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.198 [2024-07-24 19:54:46.415619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.198 [2024-07-24 19:54:46.415651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.198 [2024-07-24 19:54:46.415669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.198 [2024-07-24 19:54:46.415907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.198 [2024-07-24 19:54:46.416150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.198 [2024-07-24 19:54:46.416172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.198 [2024-07-24 19:54:46.416187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.198 [2024-07-24 19:54:46.419767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.198 [2024-07-24 19:54:46.429063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.198 [2024-07-24 19:54:46.429477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.198 [2024-07-24 19:54:46.429508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.198 [2024-07-24 19:54:46.429526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.198 [2024-07-24 19:54:46.429764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.198 [2024-07-24 19:54:46.430006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.198 [2024-07-24 19:54:46.430028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.198 [2024-07-24 19:54:46.430043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.198 [2024-07-24 19:54:46.433621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.198 [2024-07-24 19:54:46.442910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.198 [2024-07-24 19:54:46.443296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.198 [2024-07-24 19:54:46.443327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.198 [2024-07-24 19:54:46.443344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.198 [2024-07-24 19:54:46.443588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.198 [2024-07-24 19:54:46.443830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.198 [2024-07-24 19:54:46.443852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.198 [2024-07-24 19:54:46.443866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.198 [2024-07-24 19:54:46.447444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.198 [2024-07-24 19:54:46.456947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.457338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.457369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.457386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.457624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.457867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.457889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.198 [2024-07-24 19:54:46.457904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.198 [2024-07-24 19:54:46.461492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.198 [2024-07-24 19:54:46.470793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.471182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.471211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.471229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.471475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.471718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.471741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.198 [2024-07-24 19:54:46.471756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.198 [2024-07-24 19:54:46.475332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.198 [2024-07-24 19:54:46.484642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.485061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.485093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.485110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.485359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.485602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.485625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.198 [2024-07-24 19:54:46.485646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.198 [2024-07-24 19:54:46.489216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.198 [2024-07-24 19:54:46.498515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.498905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.498936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.498954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.499192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.499442] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.499466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.198 [2024-07-24 19:54:46.499481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.198 [2024-07-24 19:54:46.503048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.198 [2024-07-24 19:54:46.512349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.512760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.512791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.512808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.513046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.513299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.513322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.198 [2024-07-24 19:54:46.513337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.198 [2024-07-24 19:54:46.516905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.198 [2024-07-24 19:54:46.526193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.526605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.526635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.526653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.526890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.527131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.527154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.198 [2024-07-24 19:54:46.527169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.198 [2024-07-24 19:54:46.530745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.198 [2024-07-24 19:54:46.540037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.198 [2024-07-24 19:54:46.540431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.198 [2024-07-24 19:54:46.540462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.198 [2024-07-24 19:54:46.540480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.198 [2024-07-24 19:54:46.540717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.198 [2024-07-24 19:54:46.540959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.198 [2024-07-24 19:54:46.540981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.199 [2024-07-24 19:54:46.540996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.199 [2024-07-24 19:54:46.544572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.199 [2024-07-24 19:54:46.554074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.199 [2024-07-24 19:54:46.554446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.199 [2024-07-24 19:54:46.554477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.199 [2024-07-24 19:54:46.554495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.199 [2024-07-24 19:54:46.554733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.199 [2024-07-24 19:54:46.554975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.199 [2024-07-24 19:54:46.554998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.199 [2024-07-24 19:54:46.555013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.199 [2024-07-24 19:54:46.558630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.199 [2024-07-24 19:54:46.567924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.199 [2024-07-24 19:54:46.568341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.199 [2024-07-24 19:54:46.568372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.199 [2024-07-24 19:54:46.568390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.199 [2024-07-24 19:54:46.568628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.199 [2024-07-24 19:54:46.568870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.199 [2024-07-24 19:54:46.568893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.199 [2024-07-24 19:54:46.568908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.199 [2024-07-24 19:54:46.572486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.458 [2024-07-24 19:54:46.581791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.458 [2024-07-24 19:54:46.582202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.458 [2024-07-24 19:54:46.582232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.458 [2024-07-24 19:54:46.582260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.458 [2024-07-24 19:54:46.582499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.458 [2024-07-24 19:54:46.582748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.458 [2024-07-24 19:54:46.582771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.458 [2024-07-24 19:54:46.582785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.458 [2024-07-24 19:54:46.586362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.458 [2024-07-24 19:54:46.595655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.458 [2024-07-24 19:54:46.596081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.458 [2024-07-24 19:54:46.596112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.458 [2024-07-24 19:54:46.596130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.458 [2024-07-24 19:54:46.596378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.458 [2024-07-24 19:54:46.596620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.596642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.596657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.600224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.609528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.609936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.609966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.609984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.610222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.610472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.610495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.610510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.614078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.623390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.623791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.623822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.623839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.624078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.624329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.624353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.624368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.627945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.637251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.637673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.637703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.637721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.637958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.638202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.638225] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.638239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.641827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.651162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.651567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.651597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.651615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.651852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.652093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.652115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.652130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.655712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.665025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.665398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.665429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.665447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.665684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.665925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.665948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.665963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.669544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.679072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.679479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.679510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.679533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.679772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.680014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.680036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.680051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.683637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.692961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.693383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.693415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.693433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.693671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.693913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.693935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.693950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.697533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.706836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.707252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.707283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.707301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.707539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.707781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.707804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.707819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.711395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.720700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.721080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.721110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.721128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.721375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.721617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.721646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.721662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.459 [2024-07-24 19:54:46.725232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.459 [2024-07-24 19:54:46.734546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.459 [2024-07-24 19:54:46.734956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.459 [2024-07-24 19:54:46.734987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.459 [2024-07-24 19:54:46.735005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.459 [2024-07-24 19:54:46.735251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.459 [2024-07-24 19:54:46.735494] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.459 [2024-07-24 19:54:46.735516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.459 [2024-07-24 19:54:46.735531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.739103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.748418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.748817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.748847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.748864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.749102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.749355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.749379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.749393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.752961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.762306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.762715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.762746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.762764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.763001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.763251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.763275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.763290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.766862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.776161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.776577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.776608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.776625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.776863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.777104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.777127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.777142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.780717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.790012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.790438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.790469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.790487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.790724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.790966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.790989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.791004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.794586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.803892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.804302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.804334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.804353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.804591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.804833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.804856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.804870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.808447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.817742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.818132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.818164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.818182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.818436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.818679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.818702] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.818717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.822305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.460 [2024-07-24 19:54:46.831602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.460 [2024-07-24 19:54:46.831985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.460 [2024-07-24 19:54:46.832015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.460 [2024-07-24 19:54:46.832033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.460 [2024-07-24 19:54:46.832281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.460 [2024-07-24 19:54:46.832524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.460 [2024-07-24 19:54:46.832547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.460 [2024-07-24 19:54:46.832562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.460 [2024-07-24 19:54:46.836130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.720 [2024-07-24 19:54:46.845453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.720 [2024-07-24 19:54:46.845867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.720 [2024-07-24 19:54:46.845898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.720 [2024-07-24 19:54:46.845916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.720 [2024-07-24 19:54:46.846154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.720 [2024-07-24 19:54:46.846406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.720 [2024-07-24 19:54:46.846429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.720 [2024-07-24 19:54:46.846444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.720 [2024-07-24 19:54:46.850013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.720 [2024-07-24 19:54:46.859316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.720 [2024-07-24 19:54:46.859723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.720 [2024-07-24 19:54:46.859754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.720 [2024-07-24 19:54:46.859771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.720 [2024-07-24 19:54:46.860009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.720 [2024-07-24 19:54:46.860259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.720 [2024-07-24 19:54:46.860283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.720 [2024-07-24 19:54:46.860304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.720 [2024-07-24 19:54:46.863873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.720 [2024-07-24 19:54:46.873172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.720 [2024-07-24 19:54:46.873600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.720 [2024-07-24 19:54:46.873631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.720 [2024-07-24 19:54:46.873648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.720 [2024-07-24 19:54:46.873886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.720 [2024-07-24 19:54:46.874128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.720 [2024-07-24 19:54:46.874151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.720 [2024-07-24 19:54:46.874165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.720 [2024-07-24 19:54:46.878007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.720 [2024-07-24 19:54:46.887113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.720 [2024-07-24 19:54:46.887537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.720 [2024-07-24 19:54:46.887568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.720 [2024-07-24 19:54:46.887586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.720 [2024-07-24 19:54:46.887824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.720 [2024-07-24 19:54:46.888066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.720 [2024-07-24 19:54:46.888089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.720 [2024-07-24 19:54:46.888103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.720 [2024-07-24 19:54:46.891679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.720 [2024-07-24 19:54:46.900972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.720 [2024-07-24 19:54:46.901380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.720 [2024-07-24 19:54:46.901411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.720 [2024-07-24 19:54:46.901428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.720 [2024-07-24 19:54:46.901666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.720 [2024-07-24 19:54:46.901907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.720 [2024-07-24 19:54:46.901929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.720 [2024-07-24 19:54:46.901944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.720 [2024-07-24 19:54:46.905520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.720 [2024-07-24 19:54:46.915019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.720 [2024-07-24 19:54:46.915431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.720 [2024-07-24 19:54:46.915462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.720 [2024-07-24 19:54:46.915480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.720 [2024-07-24 19:54:46.915718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.720 [2024-07-24 19:54:46.915960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.720 [2024-07-24 19:54:46.915983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.720 [2024-07-24 19:54:46.915997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.720 [2024-07-24 19:54:46.919575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:46.928868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:46.929262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:46.929293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:46.929311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:46.929548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:46.929790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:46.929813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:46.929827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:46.933404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:46.942907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:46.943301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:46.943332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:46.943350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:46.943588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:46.943830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:46.943853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:46.943868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:46.947446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:46.956952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:46.957347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:46.957378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:46.957395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:46.957633] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:46.957884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:46.957907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:46.957922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:46.961514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:46.970812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:46.971223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:46.971260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:46.971280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:46.971518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:46.971760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:46.971783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:46.971797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:46.975375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:46.984677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:46.985058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:46.985088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:46.985106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:46.985354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:46.985597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:46.985619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:46.985634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:46.989202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:46.998707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:46.999122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:46.999152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:46.999170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:46.999417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:46.999660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:46.999683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:46.999697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:47.003277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:47.012605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:47.012993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:47.013024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:47.013042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:47.013294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:47.013537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:47.013559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:47.013574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:47.017143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:47.026449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:47.026855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:47.026886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:47.026903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:47.027140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:47.027395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:47.027419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:47.027434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:47.031004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:47.040310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:47.040719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:47.040750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:47.040769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:47.041007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:47.041259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:47.041283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:47.041297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:47.044865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:47.054178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:29.721 [2024-07-24 19:54:47.054595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:29.721 [2024-07-24 19:54:47.054626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:29.721 [2024-07-24 19:54:47.054650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:29.721 [2024-07-24 19:54:47.054888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:29.721 [2024-07-24 19:54:47.055130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:29.721 [2024-07-24 19:54:47.055153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:29.721 [2024-07-24 19:54:47.055168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:29.721 [2024-07-24 19:54:47.058745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:29.721 [2024-07-24 19:54:47.068057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.721 [2024-07-24 19:54:47.068487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.721 [2024-07-24 19:54:47.068519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.721 [2024-07-24 19:54:47.068536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.721 [2024-07-24 19:54:47.068773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.721 [2024-07-24 19:54:47.069015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.721 [2024-07-24 19:54:47.069038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.721 [2024-07-24 19:54:47.069052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.721 [2024-07-24 19:54:47.072657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.721 [2024-07-24 19:54:47.081977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.721 [2024-07-24 19:54:47.082389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.721 [2024-07-24 19:54:47.082420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.722 [2024-07-24 19:54:47.082437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.722 [2024-07-24 19:54:47.082675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.722 [2024-07-24 19:54:47.082917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.722 [2024-07-24 19:54:47.082940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.722 [2024-07-24 19:54:47.082955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.722 [2024-07-24 19:54:47.086553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.722 [2024-07-24 19:54:47.095861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.722 [2024-07-24 19:54:47.096271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.722 [2024-07-24 19:54:47.096311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.722 [2024-07-24 19:54:47.096329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.722 [2024-07-24 19:54:47.096568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.722 [2024-07-24 19:54:47.096809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.722 [2024-07-24 19:54:47.096837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.722 [2024-07-24 19:54:47.096853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.100429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.981 [2024-07-24 19:54:47.109734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.110158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.110189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.110206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.110462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.110705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.110727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.110742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.114322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.981 [2024-07-24 19:54:47.123619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.124012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.124043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.124060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.124308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.124550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.124573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.124588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.128163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.981 [2024-07-24 19:54:47.137503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.137891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.137922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.137939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.138177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.138431] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.138454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.138469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.142037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.981 [2024-07-24 19:54:47.151357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.151770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.151802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.151819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.152057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.152310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.152334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.152349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.155917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.981 [2024-07-24 19:54:47.165225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.165643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.165674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.165691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.165929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.166171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.166193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.166208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.169787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.981 [2024-07-24 19:54:47.179093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.179488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.179519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.179537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.179775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.180017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.180039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.180054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.183633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.981 [2024-07-24 19:54:47.192936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.193350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.193381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.193398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.193642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.193883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.193906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.193921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.197502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.981 [2024-07-24 19:54:47.206797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.207203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.207234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.207261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.207500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.207742] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.207764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.207779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.211355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.981 [2024-07-24 19:54:47.220652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.221072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.221103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.221121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.221370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.981 [2024-07-24 19:54:47.221612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.981 [2024-07-24 19:54:47.221635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.981 [2024-07-24 19:54:47.221649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.981 [2024-07-24 19:54:47.225226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.981 [2024-07-24 19:54:47.234535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.981 [2024-07-24 19:54:47.234924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.981 [2024-07-24 19:54:47.234955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.981 [2024-07-24 19:54:47.234972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.981 [2024-07-24 19:54:47.235211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.235463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.235486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.235507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.239079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.982 [2024-07-24 19:54:47.248384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.248875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.248906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.248923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.249161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.249413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.249437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.249452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.253023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.982 [2024-07-24 19:54:47.262368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.262728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.262758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.262776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.263014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.263267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.263290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.263305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.266875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.982 [2024-07-24 19:54:47.276395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.276805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.276835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.276853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.277091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.277344] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.277367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.277382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.280952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.982 [2024-07-24 19:54:47.290265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.290822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.290853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.290870] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.291109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.291361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.291385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.291400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.294968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.982 [2024-07-24 19:54:47.304274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.304683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.304714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.304731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.304968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.305211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.305233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.305258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.308829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.982 [2024-07-24 19:54:47.318127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.318557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.318587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.318604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.318842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.319084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.319107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.319122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.322700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:29.982 [2024-07-24 19:54:47.332010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.332405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.332435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.332452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.332690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.332938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.332961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.332976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.336553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:29.982 [2024-07-24 19:54:47.345860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:29.982 [2024-07-24 19:54:47.346285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:29.982 [2024-07-24 19:54:47.346316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:29.982 [2024-07-24 19:54:47.346334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:29.982 [2024-07-24 19:54:47.346573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:29.982 [2024-07-24 19:54:47.346814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:29.982 [2024-07-24 19:54:47.346837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:29.982 [2024-07-24 19:54:47.346853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:29.982 [2024-07-24 19:54:47.350426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.243 [2024-07-24 19:54:47.359725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.243 [2024-07-24 19:54:47.360112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-24 19:54:47.360143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.243 [2024-07-24 19:54:47.360160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.243 [2024-07-24 19:54:47.360408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.243 [2024-07-24 19:54:47.360651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.243 [2024-07-24 19:54:47.360674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.243 [2024-07-24 19:54:47.360689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.243 [2024-07-24 19:54:47.364280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.243 [2024-07-24 19:54:47.373578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.243 [2024-07-24 19:54:47.373964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.243 [2024-07-24 19:54:47.373996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.243 [2024-07-24 19:54:47.374014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.243 [2024-07-24 19:54:47.374262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.374506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.374529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.374543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.378121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.244 [2024-07-24 19:54:47.387441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.387909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.387959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.387977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.388215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.388595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.388621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.388636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.392210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.244 [2024-07-24 19:54:47.401328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.401810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.401841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.401859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.402097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.402349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.402376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.402390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.405963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.244 [2024-07-24 19:54:47.415282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.415666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.415697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.415715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.415954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.416196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.416218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.416234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.419815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.244 [2024-07-24 19:54:47.429123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.429558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.429611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.429634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.429873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.430115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.430138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.430153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.433734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.244 [2024-07-24 19:54:47.443041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.443433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.443464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.443482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.443720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.443962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.443985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.444001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.447578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.244 [2024-07-24 19:54:47.456879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.457264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.457294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.457312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.457549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.457791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.457813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.457828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.461412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.244 [2024-07-24 19:54:47.470731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.471141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.471172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.471190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.471438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.471680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.471709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.471725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.475310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.244 [2024-07-24 19:54:47.484638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.485023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.485055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.485083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.485341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.485584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.485607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.485622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.489195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.244 [2024-07-24 19:54:47.498506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.498918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.498948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.498966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.499203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.244 [2024-07-24 19:54:47.499454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.244 [2024-07-24 19:54:47.499478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.244 [2024-07-24 19:54:47.499493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.244 [2024-07-24 19:54:47.503069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.244 [2024-07-24 19:54:47.512405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.244 [2024-07-24 19:54:47.512813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.244 [2024-07-24 19:54:47.512844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.244 [2024-07-24 19:54:47.512861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.244 [2024-07-24 19:54:47.513099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.513351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.513375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.513390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.516961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.245 [2024-07-24 19:54:47.526278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.526697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.526728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.526745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.526982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.527224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.527256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.527273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.530848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.245 [2024-07-24 19:54:47.540142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.540564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.540595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.540613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.540851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.541093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.541116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.541131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.544711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.245 [2024-07-24 19:54:47.554009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.554396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.554427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.554444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.554682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.554924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.554947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.554962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.558539] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.245 [2024-07-24 19:54:47.567844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.568255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.568286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.568303] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.568547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.568789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.568811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.568827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.572404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.245 [2024-07-24 19:54:47.581707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.582114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.582145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.582162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.582410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.582653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.582676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.582691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.586268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.245 [2024-07-24 19:54:47.595566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.595976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.596007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.596024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.596273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.596515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.596538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.596553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.600123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.245 [2024-07-24 19:54:47.609433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.245 [2024-07-24 19:54:47.609852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.245 [2024-07-24 19:54:47.609883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.245 [2024-07-24 19:54:47.609900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.245 [2024-07-24 19:54:47.610138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.245 [2024-07-24 19:54:47.610390] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.245 [2024-07-24 19:54:47.610413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.245 [2024-07-24 19:54:47.610434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.245 [2024-07-24 19:54:47.614007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.506 [2024-07-24 19:54:47.623332] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.623766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.623796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.623814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.624051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.506 [2024-07-24 19:54:47.624305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.506 [2024-07-24 19:54:47.624328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.506 [2024-07-24 19:54:47.624343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.506 [2024-07-24 19:54:47.627921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.506 [2024-07-24 19:54:47.637222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.637622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.637652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.637670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.637907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.506 [2024-07-24 19:54:47.638149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.506 [2024-07-24 19:54:47.638172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.506 [2024-07-24 19:54:47.638186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.506 [2024-07-24 19:54:47.641768] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.506 [2024-07-24 19:54:47.651282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.651660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.651691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.651708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.651946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.506 [2024-07-24 19:54:47.652188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.506 [2024-07-24 19:54:47.652210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.506 [2024-07-24 19:54:47.652225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.506 [2024-07-24 19:54:47.655802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.506 [2024-07-24 19:54:47.665329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.665848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.665906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.665924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.666162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.506 [2024-07-24 19:54:47.666415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.506 [2024-07-24 19:54:47.666438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.506 [2024-07-24 19:54:47.666453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.506 [2024-07-24 19:54:47.670023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.506 [2024-07-24 19:54:47.679329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.679715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.679746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.679764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.680002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.506 [2024-07-24 19:54:47.680254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.506 [2024-07-24 19:54:47.680277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.506 [2024-07-24 19:54:47.680292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.506 [2024-07-24 19:54:47.683863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.506 [2024-07-24 19:54:47.693174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.693572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.693602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.693620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.693857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.506 [2024-07-24 19:54:47.694099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.506 [2024-07-24 19:54:47.694121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.506 [2024-07-24 19:54:47.694137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.506 [2024-07-24 19:54:47.697716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.506 [2024-07-24 19:54:47.707009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.506 [2024-07-24 19:54:47.707400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.506 [2024-07-24 19:54:47.707431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.506 [2024-07-24 19:54:47.707449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.506 [2024-07-24 19:54:47.707693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.707935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.707958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.707973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.711555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.507 [2024-07-24 19:54:47.720851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.721259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.721290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.721307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.721546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.721787] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.721810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.721824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.725404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.507 [2024-07-24 19:54:47.734701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.735121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.735153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.735170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.735418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.735661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.735683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.735698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.739273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.507 [2024-07-24 19:54:47.748574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.748959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.748989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.749006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.749254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.749496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.749519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.749534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.753113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.507 [2024-07-24 19:54:47.762426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.762839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.762870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.762888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.763135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.763388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.763413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.763428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.767001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.507 [2024-07-24 19:54:47.776322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.776737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.776769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.776786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.777024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.777278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.777301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.777316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.780903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.507 [2024-07-24 19:54:47.790221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.790614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.790646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.790664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.790902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.791146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.791169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.791183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.794761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.507 [2024-07-24 19:54:47.804089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.507 [2024-07-24 19:54:47.804509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.507 [2024-07-24 19:54:47.804540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.507 [2024-07-24 19:54:47.804563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.507 [2024-07-24 19:54:47.804802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.507 [2024-07-24 19:54:47.805044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.507 [2024-07-24 19:54:47.805066] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.507 [2024-07-24 19:54:47.805081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.507 [2024-07-24 19:54:47.808667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.507 [2024-07-24 19:54:47.817988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.507 [2024-07-24 19:54:47.818405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.507 [2024-07-24 19:54:47.818436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:30.507 [2024-07-24 19:54:47.818454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:30.507 [2024-07-24 19:54:47.818691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:30.507 [2024-07-24 19:54:47.818933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.507 [2024-07-24 19:54:47.818956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.507 [2024-07-24 19:54:47.818971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.507 [2024-07-24 19:54:47.822560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.507 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1280966 Killed "${NVMF_APP[@]}" "$@"
00:25:30.507 19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@725 -- # xtrace_disable
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:30.507 [2024-07-24 19:54:47.831876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.507 [2024-07-24 19:54:47.832272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.507 [2024-07-24 19:54:47.832304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:30.507 [2024-07-24 19:54:47.832322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:30.508 [2024-07-24 19:54:47.832560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:30.508 [2024-07-24 19:54:47.832802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.508 [2024-07-24 19:54:47.832825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.508 [2024-07-24 19:54:47.832840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.508 19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@485 -- # nvmfpid=1281982
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@486 -- # waitforlisten 1281982
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@832 -- # '[' -z 1281982 ']'
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local max_retries=100
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:30.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@841 -- # xtrace_disable
19:54:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:30.508 [2024-07-24 19:54:47.836423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.508 [2024-07-24 19:54:47.845724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.508 [2024-07-24 19:54:47.846133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.508 [2024-07-24 19:54:47.846165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:30.508 [2024-07-24 19:54:47.846183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:30.508 [2024-07-24 19:54:47.846429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:30.508 [2024-07-24 19:54:47.846672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.508 [2024-07-24 19:54:47.846695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.508 [2024-07-24 19:54:47.846710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.508 [2024-07-24 19:54:47.850288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
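The trace above is the target side of the fault injection: while the host keeps retrying, the test relaunches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace. Read as a standalone command (the namespace name comes from the test's NIC setup; flag meanings match the target's own startup notices later in this log):

# -i 0      : shared-memory instance id 0 (matches 'spdk_trace ... -i 0' below)
# -e 0xFFFF : tracepoint group mask, confirmed by "Tracepoint Group Mask 0xFFFF"
# -m 0xE    : core mask 0b1110, i.e. run reactors on cores 1-3
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE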
00:25:30.508 [2024-07-24 19:54:47.859589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.508 [2024-07-24 19:54:47.859997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.508 [2024-07-24 19:54:47.860028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.508 [2024-07-24 19:54:47.860045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.508 [2024-07-24 19:54:47.860292] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.508 [2024-07-24 19:54:47.860534] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.508 [2024-07-24 19:54:47.860557] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.508 [2024-07-24 19:54:47.860572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.508 [2024-07-24 19:54:47.864156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.508 [2024-07-24 19:54:47.873471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.508 [2024-07-24 19:54:47.873859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.508 [2024-07-24 19:54:47.873890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.508 [2024-07-24 19:54:47.873907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.508 [2024-07-24 19:54:47.874146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.508 [2024-07-24 19:54:47.874406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.508 [2024-07-24 19:54:47.874430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.508 [2024-07-24 19:54:47.874445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.508 [2024-07-24 19:54:47.878023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 [2024-07-24 19:54:47.886363] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:25:30.768 [2024-07-24 19:54:47.886435] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.768 [2024-07-24 19:54:47.887360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.887756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.887787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.887805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.888044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.888305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.888330] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.888346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.892085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 [2024-07-24 19:54:47.901432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.901849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.901881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.901899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.902138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.902388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.902412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.902427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.905995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.768 [2024-07-24 19:54:47.915307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.915722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.915752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.915770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.916008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.916259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.916288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.916303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.919871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.768 [2024-07-24 19:54:47.929169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.929548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.929580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.929598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.929836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.930086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.930109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.930123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.933701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.768 [2024-07-24 19:54:47.943206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.943596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.943628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.943646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.943884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.944126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.944148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.944164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.947739] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 [2024-07-24 19:54:47.957037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.957433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.957464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.957482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.957720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.957962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.957984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.957999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.961582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.768 [2024-07-24 19:54:47.963885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:30.768 [2024-07-24 19:54:47.970917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.971406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.971443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.971463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.971707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.971952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.971975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.971991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.975576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 [2024-07-24 19:54:47.984893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.985415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.985452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.985472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.985716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.985960] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.985983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.986000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:47.989580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
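"Total cores available: 3" above is the EAL confirming the -m 0xE core mask: 0xE = 0b1110 selects cores 1, 2 and 3 (core 0 is left to the OS), which is why exactly three reactors start shortly afterwards. A quick, illustrative way to decode any SPDK/DPDK core mask from a shell:

# Decode a core mask into the cores it selects (prints cores 1, 2, 3 for 0xE).
mask=0xE
for ((core = 0; core < 64; core++)); do
    if (( (mask >> core) & 1 )); then
        echo "core $core selected"
    fi
done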
00:25:30.768 [2024-07-24 19:54:47.998881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:47.999301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:47.999332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:47.999350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:47.999588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:47.999830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:47.999854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:47.999869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:48.003450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 [2024-07-24 19:54:48.012752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:48.013181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:48.013212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:48.013251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:48.013494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:48.013735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:48.013758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:48.013772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:48.017349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.768 [2024-07-24 19:54:48.026647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:48.027063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:48.027096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:48.027114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:48.027364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:48.027607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:48.027630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:48.027645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:48.031219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.768 [2024-07-24 19:54:48.040547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:48.041119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:48.041162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:48.041183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:48.041440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:48.041687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:48.041711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.768 [2024-07-24 19:54:48.041728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.768 [2024-07-24 19:54:48.045306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.768 [2024-07-24 19:54:48.054607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.768 [2024-07-24 19:54:48.055006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.768 [2024-07-24 19:54:48.055038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.768 [2024-07-24 19:54:48.055057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.768 [2024-07-24 19:54:48.055306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.768 [2024-07-24 19:54:48.055550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.768 [2024-07-24 19:54:48.055584] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.769 [2024-07-24 19:54:48.055601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.769 [2024-07-24 19:54:48.059170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.769 [2024-07-24 19:54:48.068508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.769 [2024-07-24 19:54:48.068886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.769 [2024-07-24 19:54:48.068917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.769 [2024-07-24 19:54:48.068935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.769 [2024-07-24 19:54:48.069173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.769 [2024-07-24 19:54:48.069423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.769 [2024-07-24 19:54:48.069447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.769 [2024-07-24 19:54:48.069462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.769 [2024-07-24 19:54:48.073031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.769 [2024-07-24 19:54:48.082554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.769 [2024-07-24 19:54:48.082984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.769 [2024-07-24 19:54:48.083015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:30.769 [2024-07-24 19:54:48.083032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:30.769 [2024-07-24 19:54:48.083279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:30.769 [2024-07-24 19:54:48.083521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.769 [2024-07-24 19:54:48.083544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.769 [2024-07-24 19:54:48.083559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:30.769 [2024-07-24 19:54:48.085371] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:30.769 [2024-07-24 19:54:48.085407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:30.769 [2024-07-24 19:54:48.085423] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:30.769 [2024-07-24 19:54:48.085437] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:30.769 [2024-07-24 19:54:48.085448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:30.769 [2024-07-24 19:54:48.085535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:30.769 [2024-07-24 19:54:48.085590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:30.769 [2024-07-24 19:54:48.085593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:30.769 [2024-07-24 19:54:48.087150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:30.769 [2024-07-24 19:54:48.096476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:30.769 [2024-07-24 19:54:48.097043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:30.769 [2024-07-24 19:54:48.097086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:30.769 [2024-07-24 19:54:48.097117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:30.769 [2024-07-24 19:54:48.097376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:30.769 [2024-07-24 19:54:48.097625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:30.769 [2024-07-24 19:54:48.097648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:30.769 [2024-07-24 19:54:48.097665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
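The app_setup_trace notices above are actionable: because the target was started with -e 0xFFFF and -i 0, its tracepoints land in /dev/shm/nvmf_trace.0 and can be inspected exactly as the log itself suggests:

# Snapshot the live trace (instance id 0, matching nvmf_tgt -i 0):
spdk_trace -s nvmf -i 0
# Or keep the shared-memory file for offline analysis after the run:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0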
00:25:30.769 [2024-07-24 19:54:48.101247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.769 [2024-07-24 19:54:48.110566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.769 [2024-07-24 19:54:48.111079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.769 [2024-07-24 19:54:48.111122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.769 [2024-07-24 19:54:48.111144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.769 [2024-07-24 19:54:48.111405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.769 [2024-07-24 19:54:48.111652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.769 [2024-07-24 19:54:48.111676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.769 [2024-07-24 19:54:48.111695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.769 [2024-07-24 19:54:48.114987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:30.769 [2024-07-24 19:54:48.124172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.769 [2024-07-24 19:54:48.124758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.769 [2024-07-24 19:54:48.124799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.769 [2024-07-24 19:54:48.124818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.769 [2024-07-24 19:54:48.125054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.769 [2024-07-24 19:54:48.125297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.769 [2024-07-24 19:54:48.125319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.769 [2024-07-24 19:54:48.125335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.769 [2024-07-24 19:54:48.128632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:30.769 [2024-07-24 19:54:48.137650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:30.769 [2024-07-24 19:54:48.138186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:30.769 [2024-07-24 19:54:48.138225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:30.769 [2024-07-24 19:54:48.138255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:30.769 [2024-07-24 19:54:48.138483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:30.769 [2024-07-24 19:54:48.138714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:30.769 [2024-07-24 19:54:48.138744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:30.769 [2024-07-24 19:54:48.138760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:30.769 [2024-07-24 19:54:48.141997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.049 [2024-07-24 19:54:48.151308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.049 [2024-07-24 19:54:48.151803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.049 [2024-07-24 19:54:48.151842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:31.049 [2024-07-24 19:54:48.151861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:31.049 [2024-07-24 19:54:48.152083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:31.049 [2024-07-24 19:54:48.152315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.049 [2024-07-24 19:54:48.152337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.049 [2024-07-24 19:54:48.152353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.049 [2024-07-24 19:54:48.155662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.049 [2024-07-24 19:54:48.164952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.049 [2024-07-24 19:54:48.165444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.049 [2024-07-24 19:54:48.165483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:31.049 [2024-07-24 19:54:48.165502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:31.049 [2024-07-24 19:54:48.165725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:31.049 [2024-07-24 19:54:48.165946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.049 [2024-07-24 19:54:48.165967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.049 [2024-07-24 19:54:48.165983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.049 [2024-07-24 19:54:48.169209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.049 [2024-07-24 19:54:48.178405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.049 [2024-07-24 19:54:48.178780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.049 [2024-07-24 19:54:48.178807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:31.049 [2024-07-24 19:54:48.178823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:31.049 [2024-07-24 19:54:48.179053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:31.049 [2024-07-24 19:54:48.179273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.049 [2024-07-24 19:54:48.179294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.049 [2024-07-24 19:54:48.179308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.049 [2024-07-24 19:54:48.182447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:31.049 [2024-07-24 19:54:48.192031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:31.049 [2024-07-24 19:54:48.192410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.049 [2024-07-24 19:54:48.192438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:31.049 [2024-07-24 19:54:48.192454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:31.049 [2024-07-24 19:54:48.192668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:31.049 [2024-07-24 19:54:48.192885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:31.049 [2024-07-24 19:54:48.192906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:31.049 [2024-07-24 19:54:48.192919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:31.049 [2024-07-24 19:54:48.196192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.049 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@861 -- # (( i == 0 ))
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@865 -- # return 0
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@731 -- # xtrace_disable
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:31.049 [2024-07-24 19:54:48.205641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:31.049 [2024-07-24 19:54:48.206014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.049 [2024-07-24 19:54:48.206042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:31.049 [2024-07-24 19:54:48.206058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:31.049 [2024-07-24 19:54:48.206281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:31.049 [2024-07-24 19:54:48.206500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:31.049 [2024-07-24 19:54:48.206521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:31.049 [2024-07-24 19:54:48.206549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:31.049 [2024-07-24 19:54:48.209788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
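waitforlisten returning 0 above means the relaunched target answered on /var/tmp/spdk.sock within max_retries attempts, so configuration can proceed. A hedged sketch of that wait, not the literal implementation (which lives in common/autotest_common.sh); rpc_get_methods is a standard SPDK RPC:

# Poll the freshly started target's RPC socket until it responds.
rpc_addr=/var/tmp/spdk.sock
max_retries=100
for ((i = 0; i < max_retries; i++)); do
    if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
        break  # target is up; callers then proceed to configure it
    fi
    sleep 0.1
done
(( i < max_retries ))  # non-zero exit status if the target never came up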
00:25:31.049 [2024-07-24 19:54:48.219144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:31.049 [2024-07-24 19:54:48.219514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.049 [2024-07-24 19:54:48.219543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:31.049 [2024-07-24 19:54:48.219559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:31.049 [2024-07-24 19:54:48.219789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:31.049 [2024-07-24 19:54:48.220001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:31.049 [2024-07-24 19:54:48.220021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:31.049 [2024-07-24 19:54:48.220033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:31.049 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable
19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:31.049 [2024-07-24 19:54:48.223203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.049 [2024-07-24 19:54:48.223957] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:31.049 [2024-07-24 19:54:48.232770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:31.049 [2024-07-24 19:54:48.233154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.049 [2024-07-24 19:54:48.233181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420
00:25:31.049 [2024-07-24 19:54:48.233197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set
00:25:31.049 [2024-07-24 19:54:48.233421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor
00:25:31.049 [2024-07-24 19:54:48.233664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:31.049 [2024-07-24 19:54:48.233684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:31.049 [2024-07-24 19:54:48.233696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:31.049 [2024-07-24 19:54:48.236879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:31.049 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:31.049 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:31.049 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:31.049 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.049 [2024-07-24 19:54:48.246449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.049 [2024-07-24 19:54:48.246841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.049 [2024-07-24 19:54:48.246871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:31.049 [2024-07-24 19:54:48.246887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:31.049 [2024-07-24 19:54:48.247118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:31.049 [2024-07-24 19:54:48.247339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.050 [2024-07-24 19:54:48.247359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.050 [2024-07-24 19:54:48.247373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.050 [2024-07-24 19:54:48.250591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.050 [2024-07-24 19:54:48.260034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.050 [2024-07-24 19:54:48.260554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.050 [2024-07-24 19:54:48.260592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:31.050 [2024-07-24 19:54:48.260611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:31.050 [2024-07-24 19:54:48.260850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:31.050 [2024-07-24 19:54:48.261083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.050 [2024-07-24 19:54:48.261112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.050 [2024-07-24 19:54:48.261128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:31.050 Malloc0 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.050 [2024-07-24 19:54:48.264386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.050 [2024-07-24 19:54:48.273709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.050 [2024-07-24 19:54:48.274079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.050 [2024-07-24 19:54:48.274107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28ac0 with addr=10.0.0.2, port=4420 00:25:31.050 [2024-07-24 19:54:48.274123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe28ac0 is same with the state(6) to be set 00:25:31.050 [2024-07-24 19:54:48.274345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe28ac0 (9): Bad file descriptor 00:25:31.050 [2024-07-24 19:54:48.274578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:31.050 [2024-07-24 19:54:48.274598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:31.050 [2024-07-24 19:54:48.274612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:31.050 [2024-07-24 19:54:48.277881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:31.050 [2024-07-24 19:54:48.281269] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:31.050 19:54:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1281268 00:25:31.050 [2024-07-24 19:54:48.287368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:31.050 [2024-07-24 19:54:48.320269] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
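Stripped of the retry noise, the rpc_cmd calls in this stretch are the standard five-step NVMe-oF target bring-up. Re-issued by hand against an already running nvmf_tgt it would look like this (flags copied verbatim from the trace; the rpc.py path and the default /var/tmp/spdk.sock RPC socket are assumed):

    rpc=scripts/rpc.py   # run from an SPDK checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The last call is the one the retry loop has been waiting for: as soon as "Target Listening on 10.0.0.2 port 4420" appears, the host's next attempt connects and the log flips to "Resetting controller successful".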
00:25:41.022 00:25:41.022 Latency(us) 00:25:41.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:41.022 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:41.022 Verification LBA range: start 0x0 length 0x4000 00:25:41.022 Nvme1n1 : 15.01 6589.23 25.74 8549.28 0.00 8430.56 579.51 18641.35 00:25:41.022 =================================================================================================================== 00:25:41.022 Total : 6589.23 25.74 8549.28 0.00 8430.56 579.51 18641.35 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:41.022 rmmod nvme_tcp 00:25:41.022 rmmod nvme_fabrics 00:25:41.022 rmmod nvme_keyring 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # '[' -n 1281982 ']' 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@494 -- # killprocess 1281982 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' -z 1281982 ']' 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # kill -0 1281982 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # uname 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1281982 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1281982' 00:25:41.022 killing process with pid 1281982 00:25:41.022 19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # kill 1281982 00:25:41.022 
19:54:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@975 -- # wait 1281982 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:41.022 19:54:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:25:42.925 00:25:42.925 real 0m23.299s 00:25:42.925 user 1m3.378s 00:25:42.925 sys 0m4.195s 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.925 ************************************ 00:25:42.925 END TEST nvmf_bdevperf 00:25:42.925 ************************************ 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.925 ************************************ 00:25:42.925 START TEST nvmf_target_disconnect 00:25:42.925 ************************************ 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:42.925 * Looking for test storage... 
00:25:42.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.925 
19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.925 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:42.926 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # xtrace_disable 00:25:42.926 19:55:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # pci_devs=() 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -a pci_devs 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # pci_net_devs=() 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # pci_drivers=() 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -A pci_drivers 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@299 -- # net_devs=() 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@299 -- # local -ga net_devs 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@300 -- # e810=() 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@300 -- # local -ga e810 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # x722=() 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # local -ga x722 00:25:45.456 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # mlx=() 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # local -ga mlx 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@305 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:45.457 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:45.457 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@354 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # [[ up == up ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:45.457 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # [[ up == up ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:45.457 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # is_hw=yes 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 
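The block above is common.sh taking PCI inventory: device ID 0x159b under vendor 0x8086 is the Intel E810 (hence SPDK_TEST_NVMF_NICS=e810 in the job config), and both ports resolve to the renamed netdevs cvl_0_0 and cvl_0_1. The same inventory can be taken from a shell (assumes pciutils is installed):

    # list E810 ports (8086:159b, the IDs matched above) and their netdevs
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net 2>/dev/null)"
    done

The lines that follow then split the two ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, and the two pings verify reachability in both directions.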
00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:25:45.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:25:45.457 00:25:45.457 --- 10.0.0.2 ping statistics --- 00:25:45.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.457 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:25:45.457 00:25:45.457 --- 10.0.0.1 ping statistics --- 00:25:45.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.457 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@426 -- # return 0 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.457 ************************************ 00:25:45.457 START TEST nvmf_target_disconnect_tc1 00:25:45.457 ************************************ 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # nvmf_target_disconnect_tc1 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # local es=0 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:25:45.457 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.457 19:55:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.458 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.458 [2024-07-24 19:55:02.529341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:45.458 [2024-07-24 19:55:02.529402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15201a0 with addr=10.0.0.2, port=4420 00:25:45.458 [2024-07-24 19:55:02.529432] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:45.458 [2024-07-24 19:55:02.529456] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:45.458 [2024-07-24 19:55:02.529470] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:45.458 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:45.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:45.458 Initializing NVMe Controllers 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # es=1 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:25:45.458 00:25:45.458 real 0m0.093s 00:25:45.458 user 0m0.040s 00:25:45.458 sys 0m0.053s 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 ************************************ 00:25:45.458 END TEST nvmf_target_disconnect_tc1 00:25:45.458 ************************************ 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:25:45.458 19:55:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 ************************************ 00:25:45.458 START TEST nvmf_target_disconnect_tc2 00:25:45.458 ************************************ 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # nvmf_target_disconnect_tc2 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@725 -- # xtrace_disable 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@485 -- # nvmfpid=1285152 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@486 -- # waitforlisten 1285152 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # '[' -z 1285152 ']' 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local max_retries=100 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@841 -- # xtrace_disable 00:25:45.458 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.458 [2024-07-24 19:55:02.637941] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:25:45.458 [2024-07-24 19:55:02.638018] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.458 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.458 [2024-07-24 19:55:02.701181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:45.458 [2024-07-24 19:55:02.812149] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
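The tc2 target is started inside the namespace with -m 0xF0: a hex reactor core mask with bits 4-7 set, which is why spdk_app_start reports "Total cores available: 4" and the reactors below come up on cores 4, 5, 6 and 7. A mask can be decoded in-shell:

    # SPDK core mask decode: 0xF0 -> cores 4..7
    mask=0xF0
    for core in $(seq 0 31); do
        (( mask & (1 << core) )) && echo "reactor on core $core"
    done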
00:25:45.458 [2024-07-24 19:55:02.812210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.458 [2024-07-24 19:55:02.812223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.458 [2024-07-24 19:55:02.812257] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.458 [2024-07-24 19:55:02.812268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.458 [2024-07-24 19:55:02.812371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:45.458 [2024-07-24 19:55:02.812430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:45.458 [2024-07-24 19:55:02.812494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:45.458 [2024-07-24 19:55:02.812498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@865 -- # return 0 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@731 -- # xtrace_disable 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 Malloc0 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:45.716 19:55:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 [2024-07-24 19:55:02.998021] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 
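With the target rebuilt (the same transport/Malloc0/cnode1/listener sequence as before, now inside the namespace), the actual tc2 scenario starts: run the reconnect example as the initiator, let it drive I/O for two seconds, then kill -9 the target underneath it. The core of the flow, reconstructed from the @40-@47 trace lines (pids are the ones from this run; paths relative to the SPDK checkout):

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    reconnectpid=$!        # 1285230 in this run
    sleep 2
    kill -9 "$nvmfpid"     # 1285152: hard-kill nvmf_tgt mid-I/O
    sleep 2                # initiator now drains aborted commands (below)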
00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 [2024-07-24 19:55:03.026335] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1285230 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:45.716 19:55:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:45.716 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.275 19:55:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1285152 00:25:48.275 19:55:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting 
I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.275 Read completed with error (sct=0, sc=8) 00:25:48.275 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 [2024-07-24 19:55:05.052061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Write completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 00:25:48.276 Read completed with error (sct=0, sc=8) 00:25:48.276 starting I/O failed 
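The storm of completions here is the initiator flushing everything in flight after the kill: sct=0, sc=8 is NVMe's generic status "Command Aborted due to SQ Deletion", and the "CQ transport error -6 (No such device or address)" entries are ENXIO raised per qpair as the dead TCP connections are torn down. Sizing the storm from a capture (the file name is a placeholder):

    # count aborted completions; several occurrences share one log line,
    # hence grep -o rather than grep -c
    grep -o 'completed with error (sct=0, sc=8)' reconnect.log | wc -l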
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 [2024-07-24 19:55:05.052385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Read completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.276 Write completed with error (sct=0, sc=8)
00:25:48.276 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 [2024-07-24 19:55:05.052724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Read completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 Write completed with error (sct=0, sc=8)
00:25:48.277 starting I/O failed
00:25:48.277 [2024-07-24 19:55:05.052994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:48.277 [2024-07-24 19:55:05.053224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.053288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.053416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.053444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.053559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.053585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.053729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.053755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.053921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.053948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.054116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.054148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.054267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.054294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.054444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.054470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.054598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.054624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.054760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.054787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.277 qpair failed and we were unable to recover it.
00:25:48.277 [2024-07-24 19:55:05.054942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.277 [2024-07-24 19:55:05.054969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.055097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.055123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.055295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.055322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.055433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.055460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.055620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.055646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.055836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.055876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.056025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.056068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.056188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.056217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.056384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.056422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.056583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.056611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.056764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.056791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.056937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.056963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.057124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.057154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.057291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.057319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.057424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.057450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.057561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.057588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.057682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.057708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.057887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.057913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.058132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.058161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.058301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.058328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.058432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.058459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.058703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.058729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.058945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.058992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.059148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.059178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.278 [2024-07-24 19:55:05.059358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.278 [2024-07-24 19:55:05.059386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.278 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.059532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.059559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.059722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.059749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.059933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.059959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.060142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.060170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.060333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.060374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.060510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.060539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.060696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.060723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.060936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.060969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.061143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.061185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.061310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.061339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.061458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.061507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.061632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.061659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.061872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.061919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.062060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.062089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.062219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.062252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.062361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.062387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.062502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.062531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.062648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.062675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.062806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.062833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.063017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.063064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.063238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.063269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.063382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.063408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.063512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.063539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.063639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.063665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.063804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.063845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.064020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.064049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.064200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.064226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.064708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.064749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.064908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.064952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.065142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.279 [2024-07-24 19:55:05.065172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.279 qpair failed and we were unable to recover it.
00:25:48.279 [2024-07-24 19:55:05.065334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.065361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.065469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.065496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.065602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.065630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.065791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.065820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.065950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.065974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.066109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.066139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.066271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.066324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.066466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.066517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.066710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.066739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.066890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.066921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.067117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.067168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.067325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.067353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.067488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.067519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.067678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.067709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.067835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.067877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.068009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.068037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.068173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.068199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.068354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.068399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.068513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.068556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.068702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.068730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.068938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.069000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.069294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.069321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.069453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.069479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.069627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.069654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.069838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.069865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.070067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.070094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.070207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.070233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.070386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.070412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.070517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.070543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.070675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.070701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.070838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.280 [2024-07-24 19:55:05.070862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.280 qpair failed and we were unable to recover it.
00:25:48.280 [2024-07-24 19:55:05.071021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.071048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.071187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.071214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.071344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.071371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.071508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.071536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.071670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.071696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.071832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.071858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.072020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.072047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.072154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.072179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.073054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.073232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.073418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.073544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.073678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.073862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.073998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.074025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.074157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.074184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.074323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.074354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.074470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.074499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.074627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.074653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.074787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.074814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.075027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.075054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.075159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.075184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.075337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.075379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.075520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.075547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.075677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.075704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.075905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.075933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.076065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.076091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.076264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.076305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.076442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.076469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.076604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.076631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.076803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.076830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.076968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.076995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.281 qpair failed and we were unable to recover it.
00:25:48.281 [2024-07-24 19:55:05.077128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.281 [2024-07-24 19:55:05.077158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.077309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.077351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.077506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.077537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.077672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.077700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.077835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.077862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.077974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.078001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.078129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.078168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.078326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.078368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.078484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.078512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.078673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.078718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.078921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.078965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.079104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.079132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.079279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.079307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.079443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.079470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.079597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.079624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.079756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.079783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.079936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.079963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.080072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.080098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.080234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.080266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.080399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.080424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.080577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.080621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.080724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.080750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.080907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.080934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.081074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.081102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.081233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.081270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.081377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.081403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.081553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.081594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.081738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.081767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.081871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.081897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.082027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.282 [2024-07-24 19:55:05.082053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.282 qpair failed and we were unable to recover it.
00:25:48.282 [2024-07-24 19:55:05.082190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.282 [2024-07-24 19:55:05.082218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.282 qpair failed and we were unable to recover it. 00:25:48.282 [2024-07-24 19:55:05.082354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.282 [2024-07-24 19:55:05.082380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.282 qpair failed and we were unable to recover it. 00:25:48.282 [2024-07-24 19:55:05.082516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.282 [2024-07-24 19:55:05.082545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.282 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.082673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.082698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.082825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.082851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.082981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.083113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.083275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.083440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.083591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 
00:25:48.283 [2024-07-24 19:55:05.083751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.083919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.083947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.084083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.084110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.084282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.084323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.084436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.084464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.084632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.084660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.084796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.084823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.084966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.084994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.085123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.085149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.085285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.085313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 
00:25:48.283 [2024-07-24 19:55:05.085456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.085485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.085637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.085678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.085927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.085955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.086057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.086084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.086216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.086251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.086389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.086414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.086576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.086603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.086743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.086769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.086913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.086942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.087105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.087132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 
00:25:48.283 [2024-07-24 19:55:05.087268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.087295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.087430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.087457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.087587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.087612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.283 qpair failed and we were unable to recover it. 00:25:48.283 [2024-07-24 19:55:05.087713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.283 [2024-07-24 19:55:05.087738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.087876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.087905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.088051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.088078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.088212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.088239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.088383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.088409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.088540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.088568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.088703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.088731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 
00:25:48.284 [2024-07-24 19:55:05.088888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.088916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.089073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.089101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.089221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.089266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.089383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.089410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.089590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.089621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.089782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.089809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.089968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.089996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.090104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.090129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.090296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.090324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.090484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.090511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 
00:25:48.284 [2024-07-24 19:55:05.090618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.090644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.090751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.090778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.090950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.090993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.091101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.091127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.091233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.284 [2024-07-24 19:55:05.091266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.284 qpair failed and we were unable to recover it. 00:25:48.284 [2024-07-24 19:55:05.091403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.091428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.091543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.091568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.091677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.091702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.091810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.091835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.091940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.091964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 
00:25:48.285 [2024-07-24 19:55:05.092122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.092149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.092303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.092347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.092474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.092515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.092632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.092659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.092796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.092824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.092956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.092983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.093092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.093121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.093288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.093316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.093446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.093472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.093604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.093632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 
00:25:48.285 [2024-07-24 19:55:05.093791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.093818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.094002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.094032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.094181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.094212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.094376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.094403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.094543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.094576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.094826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.094877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.095109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.095139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.095310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.095338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.095478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.095506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.095624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.095653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 
00:25:48.285 [2024-07-24 19:55:05.095795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.095825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.096006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.096034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.096142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.096180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.096336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.096378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.096506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.285 [2024-07-24 19:55:05.096548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.285 qpair failed and we were unable to recover it. 00:25:48.285 [2024-07-24 19:55:05.096751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.096780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.096916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.096944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.097076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.097102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.097300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.097341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.097494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.097535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 
00:25:48.286 [2024-07-24 19:55:05.097701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.097730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.097864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.097890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.098170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.098221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.098368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.098393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.098549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.098593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.098782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.098843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.098979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.099111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.099254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.099406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 
00:25:48.286 [2024-07-24 19:55:05.099543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.099726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.099894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.099920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.100047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.100072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.100206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.100255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.100401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.100439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.100581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.100610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.100742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.100770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.100944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.100971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.101086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.101113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 
00:25:48.286 [2024-07-24 19:55:05.101220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.101253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.101364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.101390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.101517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.101543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.101677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.101704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.101881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.101909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.286 qpair failed and we were unable to recover it. 00:25:48.286 [2024-07-24 19:55:05.102075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.286 [2024-07-24 19:55:05.102103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.102228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.102261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.102367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.102393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.102495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.102520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.102666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.102693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 
00:25:48.287 [2024-07-24 19:55:05.102835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.102863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.102997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.103128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.103296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.103456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.103579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.103728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.103924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.103954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.104133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.104165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.104299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.104325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 
00:25:48.287 [2024-07-24 19:55:05.104467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.104495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.104639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.104683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.104839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.104866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.105030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.105059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.105210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.105238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.105351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.105377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.105502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.105527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.105688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.105717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.105860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.105890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.106061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 
00:25:48.287 [2024-07-24 19:55:05.106248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.106377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.106511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.106643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.106776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.106957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.106983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.107107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.107134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.107274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.107329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.287 [2024-07-24 19:55:05.107475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.287 [2024-07-24 19:55:05.107504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.287 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.107679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.107706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 
00:25:48.288 [2024-07-24 19:55:05.107839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.107882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.108031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.108061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.108249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.108277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.108379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.108405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.108506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.108548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.108668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.108694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.108803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.108828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.109020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.109047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.109204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.109231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.109377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.109404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 
00:25:48.288 [2024-07-24 19:55:05.109564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.109605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.109802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.109831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.110003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.110071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.110215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.110251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.110382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.110408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.110535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.110562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.110722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.110749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.110884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.110912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.111040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.111066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.111230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.111262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 
00:25:48.288 [2024-07-24 19:55:05.111376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.111401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.111506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.111532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.111679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.111709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.111923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.111954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.112069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.112097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.112268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.112313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.112448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.112473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.112603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.112649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.112849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.112908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.288 qpair failed and we were unable to recover it. 00:25:48.288 [2024-07-24 19:55:05.113159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.288 [2024-07-24 19:55:05.113186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 
00:25:48.289 [2024-07-24 19:55:05.113317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.113343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.113471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.113496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.113608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.113634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.113770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.113798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.113925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.113954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.114122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.114152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.114283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.114309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.114418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.114443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.114589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.114614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.114751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.114778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 
00:25:48.289 [2024-07-24 19:55:05.114934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.114962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.115116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.115143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.115256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.115283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.115390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.115416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.115562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.115588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.115715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.115746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.115895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.115922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.116050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.116091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.116224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.116258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.116415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.116443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 
00:25:48.289 [2024-07-24 19:55:05.116578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.116605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.116736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.116761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.116888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.116915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.117050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.117078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.117206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.117231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.117410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.117438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.289 qpair failed and we were unable to recover it. 00:25:48.289 [2024-07-24 19:55:05.117595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.289 [2024-07-24 19:55:05.117623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.117730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.117774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.117957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.117985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.118123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.118151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 
00:25:48.290 [2024-07-24 19:55:05.118289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.118315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.118447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.118475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.118602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.118628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.118730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.118756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.118947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.118974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.119078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.119105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.119261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.119289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.119428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.119455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.119557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.119583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.290 [2024-07-24 19:55:05.119741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.119769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 
00:25:48.290 [2024-07-24 19:55:05.119921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.290 [2024-07-24 19:55:05.119951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.290 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.120134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.120162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.120282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.120309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.120466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.120493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.120629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.120658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.120848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.120917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.121069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.121097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.121203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.121228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.121375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.121401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.121562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.121589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 
00:25:48.291 [2024-07-24 19:55:05.121729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.121757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.121897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.121926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.122050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.122078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.122258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.122286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.122417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.122443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.122591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.122626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.122844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.122873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.123064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.123091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.123249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.123276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.123417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.123444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 
00:25:48.291 [2024-07-24 19:55:05.123659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.123689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.123839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.123869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.124027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.124054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.124189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.124233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.124381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.124409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.124545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.124573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.124724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.124755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.124870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.124900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.125054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.125082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.125226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.125280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 
00:25:48.291 [2024-07-24 19:55:05.125422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.125463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.125565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.125591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.291 qpair failed and we were unable to recover it. 00:25:48.291 [2024-07-24 19:55:05.125732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.291 [2024-07-24 19:55:05.125759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.125861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.125887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.126018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.126044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.126174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.126201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.126358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.126399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.126540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.126568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.126669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.126696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.126885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.126914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 
00:25:48.292 [2024-07-24 19:55:05.127031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.127056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.127184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.127338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.127366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.127502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.127529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.127752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.127806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.127973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.128106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.128267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.128432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.128588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 
00:25:48.292 [2024-07-24 19:55:05.128746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.128896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.128924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.129963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.129991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.130146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.130174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 
00:25:48.292 [2024-07-24 19:55:05.130299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.130325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.130428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.130454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.130582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.130608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.130760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.130790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.292 [2024-07-24 19:55:05.130946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.292 [2024-07-24 19:55:05.130974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.292 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.131106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.131134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.131255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.131294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.131436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.131465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.131621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.131651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.131778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.131815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 
00:25:48.293 [2024-07-24 19:55:05.131978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.132167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.132330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.132462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.132583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.132744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.132928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.132955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.133092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.133135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.133282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.133329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.133434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.133460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 
00:25:48.293 [2024-07-24 19:55:05.133567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.133601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.133728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.133753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.133867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.133892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.134015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.134043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.134234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.134269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.134439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.134467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.134573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.134598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.134733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.134760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.134893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.134920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.135052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.135079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 
00:25:48.293 [2024-07-24 19:55:05.135204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.135232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.135372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.135400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.135538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.135565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.135683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.135715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.135899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.135926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.136059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.293 [2024-07-24 19:55:05.136086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.293 qpair failed and we were unable to recover it. 00:25:48.293 [2024-07-24 19:55:05.136247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.136279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.136410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.136437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.136563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.136589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.136759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.136786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 
00:25:48.294 [2024-07-24 19:55:05.136900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.136926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.137911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.137936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.138105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.138131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 00:25:48.294 [2024-07-24 19:55:05.138286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.294 [2024-07-24 19:55:05.138314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.294 qpair failed and we were unable to recover it. 
00:25:48.296 [2024-07-24 19:55:05.149393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.296 [2024-07-24 19:55:05.149432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.296 qpair failed and we were unable to recover it.
00:25:48.296 (the same failure then repeats, alternating between tqpair=0x5b5250 and tqpair=0x7fce84000b90, through 19:55:05.171501)
00:25:48.300 [2024-07-24 19:55:05.171663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.300 [2024-07-24 19:55:05.171690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.300 qpair failed and we were unable to recover it. 00:25:48.300 [2024-07-24 19:55:05.171846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.300 [2024-07-24 19:55:05.171872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.300 qpair failed and we were unable to recover it. 00:25:48.300 [2024-07-24 19:55:05.172038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.300 [2024-07-24 19:55:05.172065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.300 qpair failed and we were unable to recover it. 00:25:48.300 [2024-07-24 19:55:05.172192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.300 [2024-07-24 19:55:05.172219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.300 qpair failed and we were unable to recover it. 00:25:48.300 [2024-07-24 19:55:05.172358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.172403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.172556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.172586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.172751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.172795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.172907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.172934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.173070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.173096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.173265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.173293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 
00:25:48.301 [2024-07-24 19:55:05.173427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.173454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.173580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.173606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.173742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.173768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.173879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.173905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.174068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.174093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.174254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.174281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.174405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.174431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.174590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.174616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.174749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.174775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.174935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.174961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 
00:25:48.301 [2024-07-24 19:55:05.175119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.175145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.175303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.175330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.175499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.175525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.175659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.175685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.175823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.175849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.176013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.176039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.176170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.176196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.176376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.176420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.176567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.176619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.176764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.176808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 
00:25:48.301 [2024-07-24 19:55:05.176967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.176993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.301 qpair failed and we were unable to recover it. 00:25:48.301 [2024-07-24 19:55:05.177149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.301 [2024-07-24 19:55:05.177175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.177304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.177331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.177456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.177482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.177614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.177641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.177744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.177771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.177899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.177926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.178056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.178084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.178218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.178252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.178386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.178413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 
00:25:48.302 [2024-07-24 19:55:05.178520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.178547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.178657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.178684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.178826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.178854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.179021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.179048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.179185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.179211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.179397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.179442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.179619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.179664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.179846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.179891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.180051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.180077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.180209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.180235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 
00:25:48.302 [2024-07-24 19:55:05.180366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.180410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.180533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.180562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.180710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.180756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.180918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.180972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.181133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.181160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.181289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.181316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.181425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.181453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.181562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.181589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.181720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.181747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.181884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.181910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 
00:25:48.302 [2024-07-24 19:55:05.182056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.182082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.182238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.182270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.182369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.182395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.182530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.302 [2024-07-24 19:55:05.182557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.302 qpair failed and we were unable to recover it. 00:25:48.302 [2024-07-24 19:55:05.182651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.182676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.182811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.182838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.182947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.182974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.183139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.183165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.183293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.183319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.183454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.183481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 
00:25:48.303 [2024-07-24 19:55:05.183623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.183650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.183812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.183838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.183973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.183999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.184137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.184164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.184267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.184293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.184441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.184486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.184667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.184695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.184865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.184891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.184996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.185024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.185157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.185184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 
00:25:48.303 [2024-07-24 19:55:05.185327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.185358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.185531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.185558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.185720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.185747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.185889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.185915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.186041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.186068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.186180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.186208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.186356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.186400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.186565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.186592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.186717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.186743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.186845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.186873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 
00:25:48.303 [2024-07-24 19:55:05.187007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.187035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.187203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.187229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.187343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.187369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.187526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.187554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.303 [2024-07-24 19:55:05.187725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.303 [2024-07-24 19:55:05.187751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.303 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.187907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.187937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.188099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.188126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.188311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.188357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.188485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.188530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.188675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.188718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 
00:25:48.304 [2024-07-24 19:55:05.188813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.188838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.188997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.189023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.189181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.189208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.189342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.189369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.189509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.189535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.189692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.189717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.189852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.189878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.190012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.190037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.190164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.190190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.190321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.190365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 
00:25:48.304 [2024-07-24 19:55:05.190521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.190547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.190679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.190706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.190815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.190839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.190999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.191025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.191160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.191186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.191359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.191389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.191561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.191605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.191765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.191793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.191926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.191953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.192086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.192112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 
00:25:48.304 [2024-07-24 19:55:05.192213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.192238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.192423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.192466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.192622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.192666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.192775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.192802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.304 [2024-07-24 19:55:05.192914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.304 [2024-07-24 19:55:05.192939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.304 qpair failed and we were unable to recover it. 00:25:48.305 [2024-07-24 19:55:05.193042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.305 [2024-07-24 19:55:05.193069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.305 qpair failed and we were unable to recover it. 00:25:48.305 [2024-07-24 19:55:05.193263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.305 [2024-07-24 19:55:05.193290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.305 qpair failed and we were unable to recover it. 00:25:48.305 [2024-07-24 19:55:05.193421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.305 [2024-07-24 19:55:05.193449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.305 qpair failed and we were unable to recover it. 00:25:48.305 [2024-07-24 19:55:05.193583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.305 [2024-07-24 19:55:05.193610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.305 qpair failed and we were unable to recover it. 00:25:48.305 [2024-07-24 19:55:05.193771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.305 [2024-07-24 19:55:05.193798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.305 qpair failed and we were unable to recover it. 
00:25:48.305 [2024-07-24 19:55:05.193929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.305 [2024-07-24 19:55:05.193956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.305 qpair failed and we were unable to recover it.
00:25:48.305 [... the same three-line failure (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats ~210 times between 19:55:05.193 and 19:55:05.230, alternating between tqpair=0x7fce84000b90 and tqpair=0x5b5250, always with addr=10.0.0.2, port=4420 ...]
00:25:48.311 [2024-07-24 19:55:05.230219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.311 [2024-07-24 19:55:05.230270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.311 qpair failed and we were unable to recover it. 00:25:48.311 [2024-07-24 19:55:05.230387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.311 [2024-07-24 19:55:05.230432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.311 qpair failed and we were unable to recover it. 00:25:48.311 [2024-07-24 19:55:05.230571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.311 [2024-07-24 19:55:05.230597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.311 qpair failed and we were unable to recover it. 00:25:48.311 [2024-07-24 19:55:05.230729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.230755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.230888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.230932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.231100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.231130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.231298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.231327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.231456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.231485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.231614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.231641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.231798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.231828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 
00:25:48.312 [2024-07-24 19:55:05.231998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.232027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.232182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.232209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.232344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.232393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.232514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.232544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.232720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.232749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.232902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.232929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.233060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.233102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.233222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.233257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.233394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.233423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.233570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.233597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 
00:25:48.312 [2024-07-24 19:55:05.233729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.233755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.233873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.233902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.234973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.312 [2024-07-24 19:55:05.234999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.312 qpair failed and we were unable to recover it. 00:25:48.312 [2024-07-24 19:55:05.235123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.235152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 
00:25:48.313 [2024-07-24 19:55:05.235298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.235329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.235484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.235510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.235645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.235671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.235778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.235805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.235929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.235958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.236138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.236164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.236265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.236291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.236479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.236508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.236632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.236658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.236814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.236840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 
00:25:48.313 [2024-07-24 19:55:05.237018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.237184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.237355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.237479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.237630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.237781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.237951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.237980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.238134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.238161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.238324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.238351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.238516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.238560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 
00:25:48.313 [2024-07-24 19:55:05.238698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.238727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.238853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.238880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.239020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.239062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.239231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.239271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.239456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.239483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.239586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.239613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.239738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.239764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.239893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.239923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.240061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.240090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.240248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.240274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 
00:25:48.313 [2024-07-24 19:55:05.240408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.313 [2024-07-24 19:55:05.240451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.313 qpair failed and we were unable to recover it. 00:25:48.313 [2024-07-24 19:55:05.240620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.240649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.240796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.240825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.240960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.240986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.241121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.241148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.241316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.241347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.241495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.241525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.241687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.241714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.241871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.241912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.242031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.242060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 
00:25:48.314 [2024-07-24 19:55:05.242178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.242207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.242360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.242387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.242570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.242599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.242760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.242786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.242891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.242917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.243027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.243053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.243182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.243209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.243350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.243378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.243529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.243559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.243733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.243760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 
00:25:48.314 [2024-07-24 19:55:05.243931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.243965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.244087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.244116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.244285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.244318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.244453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.244478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.244604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.244630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.244805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.244834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.245007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.245036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.245217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.245250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.245431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.245460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.245578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.245607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 
00:25:48.314 [2024-07-24 19:55:05.245778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.245808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.245992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.246019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.246176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.246205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.246377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.246404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.314 [2024-07-24 19:55:05.246540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.314 [2024-07-24 19:55:05.246567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.314 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.246761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.246788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.246889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.246915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.247070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.247099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.247261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.247292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.247420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.247447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 
00:25:48.315 [2024-07-24 19:55:05.247595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.247639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.247817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.247846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.248014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.248044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.248229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.248262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.248400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.248427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.248583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.248626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.248751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.248781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.248965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.248991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.249180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.249209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.249360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.249391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 
00:25:48.315 [2024-07-24 19:55:05.249565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.249594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.249747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.249772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.249874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.249900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.250084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.250113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.250288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.250318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.250477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.250504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.250662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.250705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.250818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.250846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.250990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.251018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.251202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.251229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 
00:25:48.315 [2024-07-24 19:55:05.251346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.251389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.251543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.251577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.251748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.251777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.251920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.251946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.252093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.252120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.252254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.252293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.252431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.252460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.315 [2024-07-24 19:55:05.252610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.315 [2024-07-24 19:55:05.252639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.315 qpair failed and we were unable to recover it. 00:25:48.316 [2024-07-24 19:55:05.252821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.316 [2024-07-24 19:55:05.252850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.316 qpair failed and we were unable to recover it. 00:25:48.316 [2024-07-24 19:55:05.253022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.316 [2024-07-24 19:55:05.253052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.316 qpair failed and we were unable to recover it. 
00:25:48.316 [2024-07-24 19:55:05.253198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.316 [2024-07-24 19:55:05.253227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.316 qpair failed and we were unable to recover it.
00:25:48.322 [... the same three-line error (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 19:55:05.253 through 19:55:05.289 ...]
00:25:48.322 [2024-07-24 19:55:05.289657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.289686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.289822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.289851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.289960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.289990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.290138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.290165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.290341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.290370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.290489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.290522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.290692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.290721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.290864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.290890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.291049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.291075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.291235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.291271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 
00:25:48.322 [2024-07-24 19:55:05.291385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.291428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.291557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.291584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.291720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.291764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.291921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.291950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.292071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.292100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.292222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.292257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.292359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.292385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.292557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.292584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.292691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.292719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.292849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.292876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 
00:25:48.322 [2024-07-24 19:55:05.292983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.293010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.293144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.293187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.293328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.293355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.293512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.293539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.293721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.293774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.293918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.293948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.294119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.294153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.294306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.294333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.294464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.294507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.294653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.294689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 
00:25:48.322 [2024-07-24 19:55:05.294859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.294885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.295012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.295038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.322 qpair failed and we were unable to recover it. 00:25:48.322 [2024-07-24 19:55:05.295215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.322 [2024-07-24 19:55:05.295251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.295425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.295455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.295641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.295670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.295816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.295842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.295980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.296022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.296165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.296194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.296386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.296416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.296536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.296562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 
00:25:48.323 [2024-07-24 19:55:05.296677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.296704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.296861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.296891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.297059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.297089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.297251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.297278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.297402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.297445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.297600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.297629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.297770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.297800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.297949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.297975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.298100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.298126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.298328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.298357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 
00:25:48.323 [2024-07-24 19:55:05.298479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.298512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.298665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.298692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.298797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.298823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.298961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.298993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.299150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.299193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.299338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.299364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.299470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.299496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.299628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.299654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.299793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.299819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.299960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.299987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 
00:25:48.323 [2024-07-24 19:55:05.300104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.300130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.300232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.300265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.300398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.300428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.300584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.300611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.300786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.300815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.300959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.300988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.301130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.301160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.301290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.301317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.301418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.323 [2024-07-24 19:55:05.301444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.323 qpair failed and we were unable to recover it. 00:25:48.323 [2024-07-24 19:55:05.301599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.301628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-07-24 19:55:05.301768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.301797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.301973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.301999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.302150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.302179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.302352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.302381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.302520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.302549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.302701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.302728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.302858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.302884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.303067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.303096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.303288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.303317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.303446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.303472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-07-24 19:55:05.303606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.303632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.303766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.303793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.303953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.303998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.304142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.304169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.304306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.304349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.304492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.304532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.304679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.304709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.304863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.304889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.305022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.305048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.305186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.305213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-07-24 19:55:05.305410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.305439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.305556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.305583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.305708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.305734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.305905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.305934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.306080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.306115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.306272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.306299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.306428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.306471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.306619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.306648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.306794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.306824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.306978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-07-24 19:55:05.307155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.307329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.307501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.307650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.307784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.307909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.307936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.308075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.308101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.308206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.308231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.308369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.308396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 00:25:48.324 [2024-07-24 19:55:05.308495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.308521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.324 qpair failed and we were unable to recover it. 
00:25:48.324 [2024-07-24 19:55:05.308631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.324 [2024-07-24 19:55:05.308657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.308761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.308787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.308919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.308946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.309064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.309093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.309235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.309274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.309429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.309456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.309563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.309589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.309746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.309773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.309934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.309963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.310121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.310147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 
00:25:48.325 [2024-07-24 19:55:05.310254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.310280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.310435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.310468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.310619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.310646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.310750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.310775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.310874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.310900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.311029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.311056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.311229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.311268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.311431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.311458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.311633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.311662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 00:25:48.325 [2024-07-24 19:55:05.311808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.325 [2024-07-24 19:55:05.311837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.325 qpair failed and we were unable to recover it. 
00:25:48.325 [2024-07-24 19:55:05.312011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.325 [2024-07-24 19:55:05.312040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.325 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() refused with errno = 111 on tqpair=0x5b5250 at 10.0.0.2:4420, followed by "qpair failed and we were unable to recover it.") repeats back-to-back, differing only in its microsecond timestamps, through 19:55:05.346 ...]
00:25:48.330 [2024-07-24 19:55:05.346796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.330 [2024-07-24 19:55:05.346828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.330 qpair failed and we were unable to recover it.
00:25:48.330 [2024-07-24 19:55:05.346962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.330 [2024-07-24 19:55:05.346988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.330 qpair failed and we were unable to recover it. 00:25:48.330 [2024-07-24 19:55:05.347140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.347169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.347281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.347312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.347469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.347495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.347627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.347653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.347789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.347818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.347969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.347995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.348129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.348156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.348281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.348308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.348435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.348465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 
00:25:48.331 [2024-07-24 19:55:05.348623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.348653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.348782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.348809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.348934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.348960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.349101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.349130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.349274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.349306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.349441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.349467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.349601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.349627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.349772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.349802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.349923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.349952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.350102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.350128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 
00:25:48.331 [2024-07-24 19:55:05.350265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.350309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.350454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.350484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.350627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.350657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.350812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.350838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.350980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.351006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.351139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.351165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.351348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.351379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.331 qpair failed and we were unable to recover it. 00:25:48.331 [2024-07-24 19:55:05.351481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.331 [2024-07-24 19:55:05.351506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.351639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.351666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.351774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.351802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 
00:25:48.332 [2024-07-24 19:55:05.351954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.351984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.352159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.352186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.352323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.352350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.352483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.352512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.352659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.352688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.352832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.352859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.353017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.353062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.353223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.353256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.353392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.353418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 00:25:48.332 [2024-07-24 19:55:05.353522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.332 [2024-07-24 19:55:05.353550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.332 qpair failed and we were unable to recover it. 
00:25:48.332 [2024-07-24 19:55:05.353686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5c3230 is same with the state(6) to be set
00:25:48.332 [2024-07-24 19:55:05.353903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.332 [2024-07-24 19:55:05.353947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.332 qpair failed and we were unable to recover it.
[... 8 further identical failures for tqpair=0x7fce7c000b90 omitted, timestamps 19:55:05.354109 through 19:55:05.355307 ...]
[... 120 further connect() failed, errno = 111 / qpair failed entries omitted, alternating between tqpair=0x7fce7c000b90 and tqpair=0x5b5250, all with addr=10.0.0.2, port=4420, timestamps 19:55:05.355440 through 19:55:05.375485 ...]
00:25:48.336 [2024-07-24 19:55:05.375624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.375651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.375771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.375801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.375934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.375961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.376094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.376120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.376285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.376316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.376494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.376521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.376636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.376680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.376824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.376855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.377073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.377101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.377260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.377290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 
00:25:48.336 [2024-07-24 19:55:05.377404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.377434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.377552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.377579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.377683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.377709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.377846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.377873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.378009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.378036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.378168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.378214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.378356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.378384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.378501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.378528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.378665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.378692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.378829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.378856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 
00:25:48.336 [2024-07-24 19:55:05.379009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.379036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.379188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.379219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.379377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.379404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.336 qpair failed and we were unable to recover it. 00:25:48.336 [2024-07-24 19:55:05.379509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.336 [2024-07-24 19:55:05.379536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.379662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.379689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.379842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.379871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.380033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.380060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.380194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.380239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.380396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.380443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.380607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.380634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 
00:25:48.337 [2024-07-24 19:55:05.380765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.380791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.380920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.380947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.381058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.381090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.381255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.381282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.381456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.381486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.381664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.381692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.381789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.381815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.382003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.382032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.382164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.382192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.382326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.382354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 
00:25:48.337 [2024-07-24 19:55:05.382470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.382500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.382658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.382685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.382795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.382822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.382979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.383142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.383348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.383491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.383673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.383832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.337 [2024-07-24 19:55:05.383961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.383988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 
00:25:48.337 [2024-07-24 19:55:05.384116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.337 [2024-07-24 19:55:05.384143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.337 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.384265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.384310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.384489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.384516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.384680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.384707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.384852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.384882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.385003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.385033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.385185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.385212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.385356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.385384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.385515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.385546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.385701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.385728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 
00:25:48.338 [2024-07-24 19:55:05.385860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.385887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.385993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.386192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.386363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.386502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.386640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.386795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.386955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.386985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.387110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.387137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.387271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.387314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 
00:25:48.338 [2024-07-24 19:55:05.387426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.387456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.387584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.387611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.387712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.387743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.387901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.387931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.388056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.388082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.388219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.388254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.388414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.388443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.388566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.388593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.388726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.388753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.388916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.388946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 
00:25:48.338 [2024-07-24 19:55:05.389068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.389096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.389249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.338 [2024-07-24 19:55:05.389294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.338 qpair failed and we were unable to recover it. 00:25:48.338 [2024-07-24 19:55:05.389446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.389476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.389632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.389660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.389768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.389795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.389924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.389955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.390115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.390143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.390237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.390269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.390426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.390455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.390606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.390633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 
00:25:48.339 [2024-07-24 19:55:05.390773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.390800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.390907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.390934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.391110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.391139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.391290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.391318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.391448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.391475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.391606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.391632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.391733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.391760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.391921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.391951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.392130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.392156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.392269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.392314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 
00:25:48.339 [2024-07-24 19:55:05.392463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.392492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.392649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.392676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.392783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.392810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.392964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.392994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.393152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.393181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.393295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.393322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.393456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.393484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.393618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.393646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.393779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.393805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.393943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.393969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 
00:25:48.339 [2024-07-24 19:55:05.394072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.394100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.394287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.394317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.394500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.394532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.394640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.394667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.394829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.394856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.339 qpair failed and we were unable to recover it. 00:25:48.339 [2024-07-24 19:55:05.395047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.339 [2024-07-24 19:55:05.395077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.395214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.395248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.395408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.395438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.395582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.395612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.395771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.395798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 
00:25:48.340 [2024-07-24 19:55:05.395934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.395961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.396132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.396161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.396339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.396366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.396501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.396528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.396639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.396666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.396794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.396820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.396938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.396965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.397102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.397130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.397339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.397366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.397515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.397545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 
00:25:48.340 [2024-07-24 19:55:05.397721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.397751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.397877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.397905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.398047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.398074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.398209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.398239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.398399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.398425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.398535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.398562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.398716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.398746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.398895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.398922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.399054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.399097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 00:25:48.340 [2024-07-24 19:55:05.399261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.340 [2024-07-24 19:55:05.399306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.340 qpair failed and we were unable to recover it. 
00:25:48.346 [2024-07-24 19:55:05.431633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.431659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.431781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.431810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.431937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.431963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.432063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.432089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.432222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.432286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.432403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.432431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.432571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.432598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.432730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.432760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.432916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.432943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 00:25:48.346 [2024-07-24 19:55:05.433065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.346 [2024-07-24 19:55:05.433121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.346 qpair failed and we were unable to recover it. 
00:25:48.346 [2024-07-24 19:55:05.433252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.433283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.433414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.433441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.433598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.433625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.433736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.433762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.433893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.433921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.434023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.434050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.434216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.434250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.434379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.434406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.434534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.434560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.434688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.434717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 
00:25:48.347 [2024-07-24 19:55:05.434900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.434926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.435095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.435123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.435298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.435329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.435489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.435514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.435633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.435767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.435793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.435898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.435924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.436024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.436050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.436179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.436208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.436380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.436411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 
00:25:48.347 [2024-07-24 19:55:05.436578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.436623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.436785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.436817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.436972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.347 [2024-07-24 19:55:05.437000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.347 qpair failed and we were unable to recover it. 00:25:48.347 [2024-07-24 19:55:05.437111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.437138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.437268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.437314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.437473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.437500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.437612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.437639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.437784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.437811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.437951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.437978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.438158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.438187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 
00:25:48.348 [2024-07-24 19:55:05.438343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.438371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.438479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.438506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.438612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.438639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.438791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.438820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.438969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.438996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.439131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.439172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.439356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.439384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.439545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.439571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.439673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.439700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.439893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.439941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 
00:25:48.348 [2024-07-24 19:55:05.440066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.440093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.440250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.440278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.440394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.440421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.440524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.440550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.440712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.440754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.440875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.440905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.441072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.441099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.441218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.441270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.441423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.441450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.441577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.441603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 
00:25:48.348 [2024-07-24 19:55:05.441751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.441778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.441917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.441961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.442096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.442128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.442263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.442291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.442439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.442479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.442616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.442644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.348 qpair failed and we were unable to recover it. 00:25:48.348 [2024-07-24 19:55:05.442786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.348 [2024-07-24 19:55:05.442832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.442977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.443006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.443160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.443187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.443304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.443332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 
00:25:48.349 [2024-07-24 19:55:05.443468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.443495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.443621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.443647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.443781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.443829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.443989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.444039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.444259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.444302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.444456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.444485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.444637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.444667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.444849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.444876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.445022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.445049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.445173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.445229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 
00:25:48.349 [2024-07-24 19:55:05.445376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.445404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.445526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.445552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.445754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.445781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.445920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.445947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.446077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.446121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.446290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.446331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.446474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.446503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.446609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.446636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.446795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.446821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.446974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.447007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 
00:25:48.349 [2024-07-24 19:55:05.447188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.447217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.447359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.447388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.447506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.447532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.447664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.447691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.447845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.447874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.448029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.448055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.448195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.448222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.448400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.448429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.448567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.448595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 00:25:48.349 [2024-07-24 19:55:05.448699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.448741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.349 qpair failed and we were unable to recover it. 
00:25:48.349 [2024-07-24 19:55:05.448870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.349 [2024-07-24 19:55:05.448900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.449109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.449139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.449322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.449349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.449462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.449491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.449595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.449622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.449731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.449758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.449906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.449958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.450125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.450153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.450281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.450322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.450436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.450464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 
00:25:48.350 [2024-07-24 19:55:05.450566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.450593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.450729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.450773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.450887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.450916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.451072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.451099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.451261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.451293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.451444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.451470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.451578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.451610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.451708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.451735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.451883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.451929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.452056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.452084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 
00:25:48.350 [2024-07-24 19:55:05.452190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.452217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.452381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.452408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.452511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.452538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.452655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.452683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.452844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.452873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.452989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.453144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.453304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.453469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.453604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 
00:25:48.350 [2024-07-24 19:55:05.453767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.453919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.453946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.454049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.454075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.454227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.454264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.350 qpair failed and we were unable to recover it. 00:25:48.350 [2024-07-24 19:55:05.454387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.350 [2024-07-24 19:55:05.454414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-07-24 19:55:05.454520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-07-24 19:55:05.454547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-07-24 19:55:05.454647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-07-24 19:55:05.454674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-07-24 19:55:05.454844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-07-24 19:55:05.454870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-07-24 19:55:05.454995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-07-24 19:55:05.455024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 00:25:48.351 [2024-07-24 19:55:05.455163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.351 [2024-07-24 19:55:05.455192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.351 qpair failed and we were unable to recover it. 
00:25:48.351 [2024-07-24 19:55:05.455320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.455346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.455476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.455502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.455654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.455684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.455837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.455869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.456925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.456952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.457127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.457320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.457459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.457592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.457718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.457848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.457972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.458139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.458300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.458433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.458579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.458723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.458860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.458887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.459024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.459051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.459218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.459268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.459427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.459455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.351 qpair failed and we were unable to recover it.
00:25:48.351 [2024-07-24 19:55:05.459567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.351 [2024-07-24 19:55:05.459594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.459706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.459733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.459867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.459896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.460023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.460050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.460194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.460221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.460375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.460402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.460538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.460565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.460672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.460698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.460854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.460900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.461019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.461045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.461204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.461270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.461406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.461435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.461564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.461591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.461727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.461753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.461901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.461930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.462124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.462151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.462270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.462300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.462433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.462459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.462595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.462622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.462758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.462785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.462923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.462952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.463154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.463183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.463330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.463370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.463519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.463546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.463678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.463704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.463837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.463879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.464076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.464230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.464387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.464543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.464729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.464866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.464973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.465000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.352 [2024-07-24 19:55:05.465133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.352 [2024-07-24 19:55:05.465159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.352 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.465291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.465319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.465436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.465477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.465592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.465621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.465783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.465810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.465969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.465996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.466132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.466159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.466299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.466328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.466453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.466481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.466630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.466656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.466769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.466795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.466913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.466940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.467071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.467206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.467416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.467555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.467719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.467855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.467978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.468113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.468317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.468462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.468629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.468768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.468936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.468968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.469101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.469128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.469305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.469332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.469461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.469487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.469627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.469653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.353 [2024-07-24 19:55:05.469792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.353 [2024-07-24 19:55:05.469820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.353 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.469993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.470127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.470313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.470486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.470660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.470805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.470964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.470990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.471122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.471148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.471305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.471361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.471482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.471510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.471620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.471647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.471780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.471809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.471961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.471987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.472088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.472115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.472273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.472316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.472424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.472451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.472587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.472614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.472824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.472851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.472988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.473116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.473275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.473429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.473635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.473815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.473968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.473994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.474107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.474133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.474260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.474305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.474444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.474470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.474615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.474641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.474769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.474801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.474940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.474967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.475085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.475112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.475235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.354 [2024-07-24 19:55:05.475285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.354 qpair failed and we were unable to recover it.
00:25:48.354 [2024-07-24 19:55:05.475439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.475466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.475597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.475641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.475797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.475856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.476900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.476926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.477027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.477053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.477179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.477208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.477355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.477382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.477521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.477562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.477756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.477787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.477977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.478165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.478332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.478462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.478598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.478754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.478912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.478939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.479052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.479107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.479255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.479313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.479447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.479475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.479655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.479685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.479843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.479876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.480029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.480077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.480239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.480301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.480410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.480437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.480575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.480602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.480709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.480736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.481012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.481063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.481216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.355 [2024-07-24 19:55:05.481249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.355 qpair failed and we were unable to recover it.
00:25:48.355 [2024-07-24 19:55:05.481415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.481444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.481574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.481600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.481729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.481755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.481863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.481891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.482095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.482141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.482265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.482292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.482394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.482420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.482579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.482605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.482770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.482798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.482934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.482960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.483090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.483117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.483255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.483282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.483426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.483453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.483588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.483625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.483753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.483780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.483938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.483981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.484093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.484122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.484275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.484302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.484439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.484465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.484623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.484652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.484773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.484798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.484931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.484958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.485148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.485178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.485298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.485325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.485450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.485476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.485602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.485628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.485775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.485802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.485908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.485951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.486094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.486123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.486283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.486310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.486418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.486444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.486576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.486607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.486762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.486788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.486918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.486960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.487078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.487108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.487287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.356 [2024-07-24 19:55:05.487318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.356 qpair failed and we were unable to recover it.
00:25:48.356 [2024-07-24 19:55:05.487450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.487477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.487603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.487633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.487766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.487793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.487893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.487920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.488085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.488229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.488364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.488544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.488692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.488852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.488982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.489011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.489164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.489190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.489349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.489377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.489531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.489560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.489708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.489734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.489835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.489863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.489987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.357 [2024-07-24 19:55:05.490017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.357 qpair failed and we were unable to recover it.
00:25:48.357 [2024-07-24 19:55:05.490165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.490192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.490321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.490348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.490460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.490487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.490653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.490680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.490808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.490850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.490999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.491029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.491252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.491298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.491423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.491449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.491612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.491639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.491800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.491830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 
00:25:48.357 [2024-07-24 19:55:05.491951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.491981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.492129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.492159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.492315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.492342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.492444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.492470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.492618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.492647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.492779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.492806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.492910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.357 [2024-07-24 19:55:05.492936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.357 qpair failed and we were unable to recover it. 00:25:48.357 [2024-07-24 19:55:05.493089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.493116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.493258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.493284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.493411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.493438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 
00:25:48.358 [2024-07-24 19:55:05.493559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.493589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.493732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.493758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.493897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.493924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.494111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.494155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.494296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.494325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.494472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.494499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.494620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.494649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.494807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.494835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.494965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.494992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.495112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.495138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 
00:25:48.358 [2024-07-24 19:55:05.495249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.495276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.495384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.495410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.495545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.495571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.495675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.495702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.495837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.495863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.495995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.496021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.496154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.496185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.496296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.496323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.496423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.496450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.496595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.496634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 
00:25:48.358 [2024-07-24 19:55:05.496783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.496824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.496966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.497035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.497183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.497215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.497370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.497397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.497508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.497537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.497672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.358 [2024-07-24 19:55:05.497699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.358 qpair failed and we were unable to recover it. 00:25:48.358 [2024-07-24 19:55:05.497849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.497880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.498028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.498063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.498217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.498255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.498419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.498461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-07-24 19:55:05.498616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.498644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.498797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.498845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.498967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.499116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.499255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.499437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.499577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.499743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.499882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.499910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.500044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.500070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-07-24 19:55:05.500177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.500204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.500337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.500378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.500548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.500591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.500816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.500863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.501081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.501135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.501256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.501301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.501441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.501468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.501591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.501646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.501873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.501923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.502114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.502140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.359 [2024-07-24 19:55:05.502267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.502294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.502396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.502423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.502581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.502608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.502740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.502766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.502876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.502902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.503096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.503152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.503326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.503355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.503517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.503562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.503805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.503857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 00:25:48.359 [2024-07-24 19:55:05.504144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.359 [2024-07-24 19:55:05.504194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.359 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-07-24 19:55:05.504333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.504360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.504490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.504534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.504682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.504726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.504859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.504885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.505024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.505052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.505184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.505210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.505337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.505383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.505532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.505577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.505736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.505780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.505910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.505937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-07-24 19:55:05.506100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.506126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.506257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.506284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.506413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.506456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.506613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.506643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.506791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.506818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.506981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.507007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.507168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.507195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.507318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.507368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.507491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.507521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.507696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.507729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-07-24 19:55:05.507883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.507936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.508162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.508213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.508386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.508413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.508601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.508636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.508750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.508792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.509058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.509087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.509236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.509271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.509436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.509462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.509639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.509669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.509842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.509873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 
00:25:48.360 [2024-07-24 19:55:05.510126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.510179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.510344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.510373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.510555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.510585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.360 [2024-07-24 19:55:05.510689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.360 [2024-07-24 19:55:05.510719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.360 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.510843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.510885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.511006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.511035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.511181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.511211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.511398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.511438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.511582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.511610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.511728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.511771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-07-24 19:55:05.511923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.511966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.512096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.512123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.512252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.512280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.512407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.512434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.512592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.512618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.512754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.512781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.512912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.512941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.513077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.513104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.513238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.513271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.513407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.513433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-07-24 19:55:05.513544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.513577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.513688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.513714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.513864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.513909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.514068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.514113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.514216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.514248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.514391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.514417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.514536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.514579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.514704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.514730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.514868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.514894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 00:25:48.361 [2024-07-24 19:55:05.515053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.361 [2024-07-24 19:55:05.515079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.361 qpair failed and we were unable to recover it. 
00:25:48.361 [2024-07-24 19:55:05.515203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.361 [2024-07-24 19:55:05.515230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.361 qpair failed and we were unable to recover it.
[... the same three-entry sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / qpair failed and we were unable to recover it. — repeats for roughly 200 further attempts from [2024-07-24 19:55:05.515363] through [2024-07-24 19:55:05.550304] (console time 00:25:48.361-00:25:48.367), alternating between tqpair=0x5b5250 and tqpair=0x7fce84000b90, all with addr=10.0.0.2, port=4420 ...]
00:25:48.367 [2024-07-24 19:55:05.550433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.367 [2024-07-24 19:55:05.550459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.367 qpair failed and we were unable to recover it. 00:25:48.367 [2024-07-24 19:55:05.550596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.367 [2024-07-24 19:55:05.550625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.367 qpair failed and we were unable to recover it. 00:25:48.367 [2024-07-24 19:55:05.550827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.367 [2024-07-24 19:55:05.550856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.367 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.551031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.551060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.551213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.551250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.551398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.551425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.551562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.551588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.551755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.551784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.551957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.551987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.552137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.552166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 
00:25:48.368 [2024-07-24 19:55:05.552321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.552348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.552507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.552534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.552637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.552680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.552830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.552859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.553031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.553060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.553208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.553236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.553392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.553418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.553525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.553550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.553652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.553678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.553811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.553840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 
00:25:48.368 [2024-07-24 19:55:05.553973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.554017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.554173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.554202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.554373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.554400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.554533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.554560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.554657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.554682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.554865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.554895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.555111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.555141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.555305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.555332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.555466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.555492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.555655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.555681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 
00:25:48.368 [2024-07-24 19:55:05.555805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.555831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.368 [2024-07-24 19:55:05.555980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.368 [2024-07-24 19:55:05.556009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.368 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.556212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.556240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.556404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.556435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.556630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.556660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.556799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.556829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.556941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.556970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.557126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.557152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.557311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.557338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.557473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.557500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 
00:25:48.369 [2024-07-24 19:55:05.557605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.557631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.557754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.557783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.557930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.557959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.558082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.558108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.558240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.558272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.558410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.558437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.558565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.558595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.558803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.558832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.559017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.559046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.559189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.559218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 
00:25:48.369 [2024-07-24 19:55:05.559391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.559418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.559576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.559603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.559767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.559797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.559948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.559974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.560106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.560149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.560298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.560325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.560452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.560478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.560609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.560651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.560800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.560829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.560953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.560979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 
00:25:48.369 [2024-07-24 19:55:05.561113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.561139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.561299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.561333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.561456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.561483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.561621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.561648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.369 [2024-07-24 19:55:05.561805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.369 [2024-07-24 19:55:05.561831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.369 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.561973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.562000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.562108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.562152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.562321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.562351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.562507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.562534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.562636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.562663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 
00:25:48.370 [2024-07-24 19:55:05.562848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.562877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.563034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.563061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.563164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.563190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.563338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.563365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.563479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.563505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.563638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.563676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.563816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.563842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.564000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.564027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.564168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.564198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.564350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.564380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 
00:25:48.370 [2024-07-24 19:55:05.564529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.564556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.564687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.564714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.564851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.564881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.565034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.565061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.565161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.565187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.565318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.565349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.565503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.565529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.565656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.565700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.565821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.565850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.566015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.566041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 
00:25:48.370 [2024-07-24 19:55:05.566182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.566226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.566348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.566378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.566536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.566562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.566689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.566715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.566855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.566881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.567028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.567054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.567208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.567234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.370 [2024-07-24 19:55:05.567362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.370 [2024-07-24 19:55:05.567388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.370 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.567496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.567523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.567652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.567678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 
00:25:48.371 [2024-07-24 19:55:05.567810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.567839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.568000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.568026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.568183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.568231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.568417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.568443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.568569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.568596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.568771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.568800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.568912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.568942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.569069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.569094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.569223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.569255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.569419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.569448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 
00:25:48.371 [2024-07-24 19:55:05.569623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.569650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.569777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.569804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.569941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.569968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.570100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.570126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.570237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.570270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.570379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.570406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.570540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.570566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.570703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.570729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.570903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.570933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.571094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.571120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 
00:25:48.371 [2024-07-24 19:55:05.571278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.571308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.571456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.571487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.571637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.571663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.571795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.571821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.571980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.572190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.572335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.572469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.572625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 00:25:48.371 [2024-07-24 19:55:05.572760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it. 
00:25:48.371 [2024-07-24 19:55:05.572913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.371 [2024-07-24 19:55:05.572942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.371 qpair failed and we were unable to recover it.
00:25:48.371 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats continuously from 19:55:05.573106 through 19:55:05.611172; duplicate entries elided ...]
00:25:48.377 [2024-07-24 19:55:05.611285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.377 [2024-07-24 19:55:05.611312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.377 qpair failed and we were unable to recover it.
00:25:48.377 [2024-07-24 19:55:05.611441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.377 [2024-07-24 19:55:05.611468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.377 qpair failed and we were unable to recover it. 00:25:48.377 [2024-07-24 19:55:05.611595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.377 [2024-07-24 19:55:05.611621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.377 qpair failed and we were unable to recover it. 00:25:48.377 [2024-07-24 19:55:05.611781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.377 [2024-07-24 19:55:05.611807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.377 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.611932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.611958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.612061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.612087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.612253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.612298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.612445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.612475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.612604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.612630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.612768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.612795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.612908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.612937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 
00:25:48.378 [2024-07-24 19:55:05.613092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.613118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.613276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.613318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.613431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.613461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.613617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.613643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.613798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.613839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.613957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.613987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.614173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.614200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.614305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.614332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.614465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.614492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.614695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.614722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 
00:25:48.378 [2024-07-24 19:55:05.614854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.614882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.615086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.615113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.615273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.615300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.615429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.615472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.615598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.615628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.615779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.615805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.615934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.615961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.616125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.616154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.616316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.616344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.616453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.616480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 
00:25:48.378 [2024-07-24 19:55:05.616637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.616666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.616843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.616870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.617013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.617043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.617184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.617213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.617358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.617389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.617499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.617525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.617680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.378 [2024-07-24 19:55:05.617709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.378 qpair failed and we were unable to recover it. 00:25:48.378 [2024-07-24 19:55:05.617840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.617866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.617971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.617997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.618143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.618172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 
00:25:48.379 [2024-07-24 19:55:05.618326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.618354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.618486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.618528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.618646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.618675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.618785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.618828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.618970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.618996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.619102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.619128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.619257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.619284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.619417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.379 [2024-07-24 19:55:05.619443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.379 qpair failed and we were unable to recover it. 00:25:48.379 [2024-07-24 19:55:05.619561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.619588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.619717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.619744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 
00:25:48.672 [2024-07-24 19:55:05.619847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.619874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.619974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.620160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.620323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.620452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.620595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.620757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.620924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.620954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.621127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.621153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.621305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.621335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 
00:25:48.672 [2024-07-24 19:55:05.621464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.621493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.621623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.621649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.621812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.621856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.621979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.622009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.622158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.622185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.622292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.622319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.622477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.622506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.672 [2024-07-24 19:55:05.622666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.672 [2024-07-24 19:55:05.622692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.672 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.622828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.622854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.623003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.623032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 
00:25:48.673 [2024-07-24 19:55:05.623250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.623295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.623422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.623448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.623574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.623603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.623779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.623805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.623932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.623976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.624089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.624119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.624269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.624296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.624427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.624454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.624555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.624582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.624683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.624710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 
00:25:48.673 [2024-07-24 19:55:05.624843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.624870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.625963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.625993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.626123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.626150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.626256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.626284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 
00:25:48.673 [2024-07-24 19:55:05.626409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.626439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.626596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.626623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.626751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.626777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.626908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.626935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.627041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.627068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.627198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.627224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.627359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.627386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.627491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.627518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.673 [2024-07-24 19:55:05.627653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.673 [2024-07-24 19:55:05.627679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.673 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.627806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.627835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 
00:25:48.674 [2024-07-24 19:55:05.627998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.628024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.628156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.628183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.628326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.628368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.628501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.628528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.628635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.628661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.628819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.628846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.629003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.629125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.629310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.629463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 
00:25:48.674 [2024-07-24 19:55:05.629646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.629768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.629921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.629947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.630122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.630273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.630425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.630589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.630736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.630895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.630988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.631014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 
00:25:48.674 [2024-07-24 19:55:05.631167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.631196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.631383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.631409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.631564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.631593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.631730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.631759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.631907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.631933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.632071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.632096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.632198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.632224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.632394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.632421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.632593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.632623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.674 qpair failed and we were unable to recover it. 00:25:48.674 [2024-07-24 19:55:05.632770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.674 [2024-07-24 19:55:05.632800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 
00:25:48.675 [2024-07-24 19:55:05.632959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.632987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.633134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.633161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.633279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.633306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.633439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.633465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.633592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.633619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.633761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.633790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.633917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.633943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.634078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.634104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.634238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.634277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 00:25:48.675 [2024-07-24 19:55:05.634380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.675 [2024-07-24 19:55:05.634408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.675 qpair failed and we were unable to recover it. 
00:25:48.681 [2024-07-24 19:55:05.667544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.667571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.667702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.667729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.667837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.667864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.668018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.668048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.668232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.668268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.668422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.668451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.668589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.668618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.668773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.668799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.668935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.668962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 00:25:48.681 [2024-07-24 19:55:05.669116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.681 [2024-07-24 19:55:05.669146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.681 qpair failed and we were unable to recover it. 
00:25:48.682 [2024-07-24 19:55:05.669302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.669329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.669465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.669492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.669607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.669634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.669736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.669762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.669871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.669897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.670005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.670032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.670166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.670194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.670298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.670329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.670441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.670468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.670628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.670654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 
00:25:48.682 [2024-07-24 19:55:05.670755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.670799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.670982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.671008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.671143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.671169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.671329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.671356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.671468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.671495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.671666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.671695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.671807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.671835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.671996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.672025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.672150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.672188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.672333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.672361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 
00:25:48.682 [2024-07-24 19:55:05.672520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.672549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.672670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.672696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.672832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.672858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.673039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.673068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.673251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.673277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.673426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.673455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.673626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.673655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.682 [2024-07-24 19:55:05.673781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.682 [2024-07-24 19:55:05.673808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.682 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.673913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.673939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.674046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.674073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 
00:25:48.683 [2024-07-24 19:55:05.674221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.674271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.674427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.674457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.674616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.674643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.674812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.674838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.674964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.675130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.675307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.675468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.675621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.675750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 
00:25:48.683 [2024-07-24 19:55:05.675878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.675904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.676065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.676094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.676254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.676281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.676387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.676414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.676544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.676572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.676706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.676732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.676861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.676906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.677039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.677068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.677230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.677264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.677398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.677425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 
00:25:48.683 [2024-07-24 19:55:05.677584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.677615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.677766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.677792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.677928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.677971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.678098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.678127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.683 [2024-07-24 19:55:05.678276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.683 [2024-07-24 19:55:05.678304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.683 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.678429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.678472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.678627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.678657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.678803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.678830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.679013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.679042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.679190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.679219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 
00:25:48.684 [2024-07-24 19:55:05.679360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.679387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.679544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.679586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.679741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.679771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.679922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.679949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.680107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.680134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.680297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.680328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.680459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.680485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.680614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.680641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.680800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.680843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.680998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 
00:25:48.684 [2024-07-24 19:55:05.681140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.681264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.681417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.681577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.681737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.681895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.681926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.682084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.682110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.682272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.682300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.682456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.682482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.682638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.682681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 
00:25:48.684 [2024-07-24 19:55:05.682809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.682838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.682962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.682989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.683123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.683151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.683314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.683344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.683482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.683509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.683647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.684 [2024-07-24 19:55:05.683674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.684 qpair failed and we were unable to recover it. 00:25:48.684 [2024-07-24 19:55:05.683824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.683853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.684011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.684037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.684169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.684197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.684336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.684366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 
00:25:48.685 [2024-07-24 19:55:05.684494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.684521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.684657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.684683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.684845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.684874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.685022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.685048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.685222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.685256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.685404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.685434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.685587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.685614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.685712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.685739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.685893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.685922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.686068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.686095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 
00:25:48.685 [2024-07-24 19:55:05.686226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.686266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.686403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.686446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.686628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.686654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.686836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.686865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.686976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.687005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.687212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.687250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.687403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.687429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.687563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.687589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.687763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.687789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.687967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.687996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 
00:25:48.685 [2024-07-24 19:55:05.688139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.688168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.688304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.688331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.688490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.688516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.688661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.688690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.688851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.688878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.689008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.689035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.689193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.689227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.689394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.685 [2024-07-24 19:55:05.689420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.685 qpair failed and we were unable to recover it. 00:25:48.685 [2024-07-24 19:55:05.689520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.689547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.689653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.689683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 
00:25:48.686 [2024-07-24 19:55:05.689805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.689832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.689943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.689970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.690119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.690149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.690331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.690358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.690492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.690519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.690616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.690642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.690753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.690779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.690883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.690909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.691060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.691090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 00:25:48.686 [2024-07-24 19:55:05.691239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.686 [2024-07-24 19:55:05.691272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.686 qpair failed and we were unable to recover it. 
00:25:48.686 [2024-07-24 19:55:05.691412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.686 [2024-07-24 19:55:05.691457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.686 qpair failed and we were unable to recover it.
00:25:48.686 [... the same three-message failure sequence (connect() failed, errno = 111; sock connection error of tqpair=...; qpair failed and we were unable to recover it) repeats roughly 200 more times between 19:55:05.691 and 19:55:05.727, mostly for tqpair=0x5b5250 and intermittently for tqpair=0x7fce84000b90, always against addr=10.0.0.2, port=4420 ...]
00:25:48.693 [2024-07-24 19:55:05.727642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.693 [2024-07-24 19:55:05.727669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.693 qpair failed and we were unable to recover it.
00:25:48.693 [2024-07-24 19:55:05.727800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.727827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.728953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.728979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.729084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.729110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.729252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.729295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 
00:25:48.693 [2024-07-24 19:55:05.729408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.729438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.729557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.729584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.729698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.729724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.729842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.729873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.730028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.730054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.730209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.730270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.730450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.730480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.730609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.730635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.730793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.730820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.730939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.730967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 
00:25:48.693 [2024-07-24 19:55:05.731175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.731204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.731338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.731366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.731504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.731548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.731676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.731702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.693 qpair failed and we were unable to recover it. 00:25:48.693 [2024-07-24 19:55:05.731856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.693 [2024-07-24 19:55:05.731882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.732039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.732081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.732198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.732226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.732395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.732439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.732587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.732616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.732798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.732825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 
00:25:48.694 [2024-07-24 19:55:05.732925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.732967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.733083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.733123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.733310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.733336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.733445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.733488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.733667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.733704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.733883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.733909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.734087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.734116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.734304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.734330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.734442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.734468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.734647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.734675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 
00:25:48.694 [2024-07-24 19:55:05.734821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.734850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.735019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.735046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.735177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.735221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.735373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.735402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.735529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.735555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.735688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.735714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.735864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.735892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.736081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.736107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.736239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.736287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.736409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.736437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 
00:25:48.694 [2024-07-24 19:55:05.736565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.736591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.736701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.736727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.736888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.736915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.694 qpair failed and we were unable to recover it. 00:25:48.694 [2024-07-24 19:55:05.737071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.694 [2024-07-24 19:55:05.737097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.737271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.737300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.737413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.737442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.737580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.737605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.737743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.737769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.737951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.737981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.738106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.738131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 
00:25:48.695 [2024-07-24 19:55:05.738284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.738327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.738474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.738502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.738661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.738688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.738797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.738823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.738977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.739003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.739174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.739200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.739376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.739403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.739516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.739543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.739703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.739728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.739873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.739899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 
00:25:48.695 [2024-07-24 19:55:05.740070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.740100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.740253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.740280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.740405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.740447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.740591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.740622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.740799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.740826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.740987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.741017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.741180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.741223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.741396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.741423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.741578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.741621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.741789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.741818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 
00:25:48.695 [2024-07-24 19:55:05.741974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.695 [2024-07-24 19:55:05.742001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.695 qpair failed and we were unable to recover it. 00:25:48.695 [2024-07-24 19:55:05.742105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.742131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.742287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.742317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.742468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.742494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.742632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.742674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.742790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.742818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.742966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.742991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.743125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.743166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.743285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.743314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.743474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.743501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 
00:25:48.696 [2024-07-24 19:55:05.743625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.743668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.743809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.743837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.743958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.743985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.744119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.744145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.744282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.744309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.744448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.744475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.744573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.744599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.744784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.744812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.744939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.744965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.745093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.745119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 
00:25:48.696 [2024-07-24 19:55:05.745270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.745299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.745458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.745485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.745621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.745651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.745783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.745808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.746001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.746028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.746125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.746151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.746290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.746320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.746493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.746520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.746646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.746672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.746806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.746832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 
00:25:48.696 [2024-07-24 19:55:05.747005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.696 [2024-07-24 19:55:05.747031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.696 qpair failed and we were unable to recover it. 00:25:48.696 [2024-07-24 19:55:05.747129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.747155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.747303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.747333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.747483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.747509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.747696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.747724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.747835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.747863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.748026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.748053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.748211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.748237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.748424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.748453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.748640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.748667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 
00:25:48.697 [2024-07-24 19:55:05.748769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.748795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.748931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.748961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.749122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.749152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.749313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.749351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.749551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.749581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.749723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.749750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.749889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.749933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.750106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.750135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.750292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.750319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 00:25:48.697 [2024-07-24 19:55:05.750450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.697 [2024-07-24 19:55:05.750493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.697 qpair failed and we were unable to recover it. 
00:25:48.697 [2024-07-24 19:55:05.750646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.697 [2024-07-24 19:55:05.750675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.697 qpair failed and we were unable to recover it.
00:25:48.704 [2024-07-24 19:55:05.786756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.704 [2024-07-24 19:55:05.786782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.704 qpair failed and we were unable to recover it.
00:25:48.704 [2024-07-24 19:55:05.786918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.786946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.787076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.787101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.787206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.787232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.787390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.787421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.787543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.787570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.787708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.787734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.787833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.787859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.788009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.788041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.788227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.788286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.788409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.788437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 
00:25:48.704 [2024-07-24 19:55:05.788565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.788591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.788695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.788720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.788851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.788877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.789035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.789061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.789163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.789189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.789293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.704 [2024-07-24 19:55:05.789320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.704 qpair failed and we were unable to recover it. 00:25:48.704 [2024-07-24 19:55:05.789456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.789481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.789609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.789651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.789795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.789824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.789973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.789998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 
00:25:48.705 [2024-07-24 19:55:05.790125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.790151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.790324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.790352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.790486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.790512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.790647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.790673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.790778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.790804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.790965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.790991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.791142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.791171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.791339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.791369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.791504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.791663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.791688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 
00:25:48.705 [2024-07-24 19:55:05.791851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.791879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.792025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.792050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.792219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.792255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.792410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.792436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.792559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.792585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.792723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.792767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.792881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.792910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.793061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.793087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.793266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.793296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.793443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.793472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 
00:25:48.705 [2024-07-24 19:55:05.793613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.793638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.793770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.793795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.793984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.794014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.794170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.794197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.794335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.794361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.794466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.794493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.794672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.705 [2024-07-24 19:55:05.794698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.705 qpair failed and we were unable to recover it. 00:25:48.705 [2024-07-24 19:55:05.794824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.794850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.794990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.795024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.795206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.795232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 
00:25:48.706 [2024-07-24 19:55:05.795342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.795368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.795479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.795531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.795686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.795712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.795848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.795874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.796010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.796036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.796184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.796209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.796369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.796396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.796553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.796582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.796771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.796797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.796907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.796932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 
00:25:48.706 [2024-07-24 19:55:05.797088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.797118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.797296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.797323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.797488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.797514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.797669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.797697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.797830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.797856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.797982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.798007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.798164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.798193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.798351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.798378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.798516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.798541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.798710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.798738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 
00:25:48.706 [2024-07-24 19:55:05.798841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.798882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.799014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.799039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.799182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.799210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.799377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.706 [2024-07-24 19:55:05.799403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.706 qpair failed and we were unable to recover it. 00:25:48.706 [2024-07-24 19:55:05.799538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.799563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.799753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.799783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.799921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.799948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.800081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.800107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.800206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.800231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.800372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.800399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 
00:25:48.707 [2024-07-24 19:55:05.800509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.800537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.800697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.800727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.800910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.800937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.801064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.801090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.801278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.801309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.801463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.801490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.801659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.801687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.801832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.801860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.801982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.802008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.802174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.802216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 
00:25:48.707 [2024-07-24 19:55:05.802358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.802388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.802543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.802570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.802710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.802753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.802926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.802955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.803108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.803134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.803297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.803340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.803456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.803487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.803641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.803667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.803801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.803828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.803993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 
00:25:48.707 [2024-07-24 19:55:05.804195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.804355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.804515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.804685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.804821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.804955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.804980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.707 [2024-07-24 19:55:05.805133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.707 [2024-07-24 19:55:05.805161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.707 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.805302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.805329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.805439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.805465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.805584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.805610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 
00:25:48.708 [2024-07-24 19:55:05.805711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.805736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.805891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.805920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.806073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.806099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.806210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.806235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.806354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.806380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.806483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.806509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.806664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.806712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.806890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.806918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.807054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.807206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 
00:25:48.708 [2024-07-24 19:55:05.807374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.807535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.807664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.807795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.807929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.807955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.808096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.808121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.808263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.808292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.808428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.808455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.808611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.808637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 00:25:48.708 [2024-07-24 19:55:05.808772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.708 [2024-07-24 19:55:05.808800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.708 qpair failed and we were unable to recover it. 
00:25:48.708 [2024-07-24 19:55:05.808977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.708 [2024-07-24 19:55:05.809004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.708 qpair failed and we were unable to recover it.
00:25:48.708 [... the three-line failure sequence above repeats for roughly 200 further connection attempts between 19:55:05.809 and 19:55:05.844, with tqpair cycling among 0x5b5250, 0x7fce84000b90, and 0x7fce8c000b90; every attempt fails with errno = 111 (ECONNREFUSED) ...]
00:25:48.715 [2024-07-24 19:55:05.844073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.715 [2024-07-24 19:55:05.844102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.715 qpair failed and we were unable to recover it.
00:25:48.715 [2024-07-24 19:55:05.844232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.844267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.844434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.844461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.844596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.844643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.844803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.844847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.844976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.845003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.845107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.845134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.845237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.845269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.715 [2024-07-24 19:55:05.845373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.715 [2024-07-24 19:55:05.845400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.715 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.845499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.845525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.845629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.845654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 
00:25:48.716 [2024-07-24 19:55:05.845815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.845842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.845979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.846006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.846107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.846135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.846324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.846369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.846491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.846521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.846672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.846701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.846839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.846889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.847004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.847034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.847195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.847239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.847416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.847446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 
00:25:48.716 [2024-07-24 19:55:05.847557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.847587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.847741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.847770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.847910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.847938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.848076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.848105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.848230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.848286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.848424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.848453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.848684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.848713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.848942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.848972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.849145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.849174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.849304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.849331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 
00:25:48.716 [2024-07-24 19:55:05.849429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.849455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.849598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.849624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.849784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.849814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.850036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.850066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.850214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.850258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.850407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.850433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.850539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.850565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.850740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.850771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.850933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.716 [2024-07-24 19:55:05.850963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.716 qpair failed and we were unable to recover it. 00:25:48.716 [2024-07-24 19:55:05.851082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.851111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 
00:25:48.717 [2024-07-24 19:55:05.851262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.851301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.851406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.851433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.851569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.851596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.851854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.851884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.852053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.852083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.852323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.852350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.852570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.852597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.852745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.852774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.852918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.852947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.853070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.853112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 
00:25:48.717 [2024-07-24 19:55:05.853235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.853272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.853406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.853432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.853563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.853589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.853724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.853751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.853868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.853894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.854024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.854054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.854173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.854203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.854373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.854400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.854543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.854569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.854698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.854733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 
00:25:48.717 [2024-07-24 19:55:05.854893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.854922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.855073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.855102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.855212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.855249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.855372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.717 [2024-07-24 19:55:05.855398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.717 qpair failed and we were unable to recover it. 00:25:48.717 [2024-07-24 19:55:05.855499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.855526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.855687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.855716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.855870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.855899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.856080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.856109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.856233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.856284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.856426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.856452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 
00:25:48.718 [2024-07-24 19:55:05.856571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.856599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.856727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.856753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.856883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.856911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.857066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.857236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.857386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.857521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.857686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.857844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.857996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.858027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 
00:25:48.718 [2024-07-24 19:55:05.858169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.858197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.858400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.858427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.858588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.858613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.858764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.858791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.858940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.858968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.859105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.859132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.859320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.859346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.859460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.859487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.859625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.859652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.859764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.859790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 
00:25:48.718 [2024-07-24 19:55:05.859929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.859958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.860082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.860108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.860214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.860249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.860359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.860384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.860539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.718 [2024-07-24 19:55:05.860566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.718 qpair failed and we were unable to recover it. 00:25:48.718 [2024-07-24 19:55:05.860679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.860708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.860828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.860856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.860999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.861028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.861163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.861191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.861295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.861337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 
00:25:48.719 [2024-07-24 19:55:05.861461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.861501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.861683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.861728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.861906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.861949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.862085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.862112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.862234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.862268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.862387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.862431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.862582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.862628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.862783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.862826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.862928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.862963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.863110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.863138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 
00:25:48.719 [2024-07-24 19:55:05.863256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.863284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.863397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.863424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.863567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.863594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.863705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.863737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.863842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.863868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.863975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.864141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.864309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.864441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.864570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 
00:25:48.719 [2024-07-24 19:55:05.864707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.864843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.864869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.864980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.865007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.865123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.865149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.865315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.865345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.865480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.865512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.719 [2024-07-24 19:55:05.865642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.719 [2024-07-24 19:55:05.865669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.719 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.865808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.865835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.865945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.865972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.866107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.866133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 
00:25:48.720 [2024-07-24 19:55:05.866301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.866328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.866440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.866466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.866607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.866633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.866751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.866777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.866944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.866970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.867084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.867110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.867212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.867239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.867378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.867405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.867566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.867592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.867701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.867727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 
00:25:48.720 [2024-07-24 19:55:05.867890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.867921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.868916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.868942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.869051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.869076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 00:25:48.720 [2024-07-24 19:55:05.869189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-07-24 19:55:05.869216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.720 qpair failed and we were unable to recover it. 
00:25:48.720 [2024-07-24 19:55:05.869353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720 [2024-07-24 19:55:05.869380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.720 qpair failed and we were unable to recover it.
00:25:48.720 [2024-07-24 19:55:05.869508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720 [2024-07-24 19:55:05.869534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.720 qpair failed and we were unable to recover it.
00:25:48.720 [2024-07-24 19:55:05.869644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720 [2024-07-24 19:55:05.869670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.720 qpair failed and we were unable to recover it.
00:25:48.720 [2024-07-24 19:55:05.869803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720 [2024-07-24 19:55:05.869829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.720 qpair failed and we were unable to recover it.
00:25:48.720 [2024-07-24 19:55:05.869976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720 [2024-07-24 19:55:05.870005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.720 qpair failed and we were unable to recover it.
00:25:48.720 [2024-07-24 19:55:05.870104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.720 [2024-07-24 19:55:05.870131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.720 qpair failed and we were unable to recover it.
00:25:48.720 [2024-07-24 19:55:05.870266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.870300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.870441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.870467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.870600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.870627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.870729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.870755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.870869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.870898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.870998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.871140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.871313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.871442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.871634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.871792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.871947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.871978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.872080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.872107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.872251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.872290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.872434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.872461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.872603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.872642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.872787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.872815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.872953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.872982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.873116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.873143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.873252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.873292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.873405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.873433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.873558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.873584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.873695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.873721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.873822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.873849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.874009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.874035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.874202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.874262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.874397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.874426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.874537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.874563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.874693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.874720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.874827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.721 [2024-07-24 19:55:05.874852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.721 qpair failed and we were unable to recover it.
00:25:48.721 [2024-07-24 19:55:05.875011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.875037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.875138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.875165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.875277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.875304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.875463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.875490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.875619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.875644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.875817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.875845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.875975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.876139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.876326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.876476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.876614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.876740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.876882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.876908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.877949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.877974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.878103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.878231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.878399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.878537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.878698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.878850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.878979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.879006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.879154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.879195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.879357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.879397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.879544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.879573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.879710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.879737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.879867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.879894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.880005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.722 [2024-07-24 19:55:05.880031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.722 qpair failed and we were unable to recover it.
00:25:48.722 [2024-07-24 19:55:05.880133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.880159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.880288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.880329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.880488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.880522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.880663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.880690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.880807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.880834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.880939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.880966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.881104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.881133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.881248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.881286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.881389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.881415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.881532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.881558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.881691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.881717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.881850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.881876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.882009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.882035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.882211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.882259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.882408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.882436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.882553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.882579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.882687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.882713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.882874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.882900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.883056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.883082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.883218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.883251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.883409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.883435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.883581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.883607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.883742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.883769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.883914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.883940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.884050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.884077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.723 qpair failed and we were unable to recover it.
00:25:48.723 [2024-07-24 19:55:05.884205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.723 [2024-07-24 19:55:05.884252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.884379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.884407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.884535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.884561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.884721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.884748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.884882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.884914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.885958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.885985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.886126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.886319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.886455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.886591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.886710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.886875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.886986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.887847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.887992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.888152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.888316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.888451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.888577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.888713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.888872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.888902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.889009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.889035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.724 qpair failed and we were unable to recover it.
00:25:48.724 [2024-07-24 19:55:05.889143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.724 [2024-07-24 19:55:05.889170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.889282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.889308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.889414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.889440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.889564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.889591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.889721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.889747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.889873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.889898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.890033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.890160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.890314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.890487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.890673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.890834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.890975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.891109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.891252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.891415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.891575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.891732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.891853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.891879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.892009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.892034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.892185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.892224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.892390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.892418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.892561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.892588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.892721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.892747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.892887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.892913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.893859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.725 [2024-07-24 19:55:05.893886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.725 qpair failed and we were unable to recover it.
00:25:48.725 [2024-07-24 19:55:05.894019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.894144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.894294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.894425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.894569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.894750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.894911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.894939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.895043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.895073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.895209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.895236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.895357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.895383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.895492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.726 [2024-07-24 19:55:05.895518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.726 qpair failed and we were unable to recover it.
00:25:48.726 [2024-07-24 19:55:05.895635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.895661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.895776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.895801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.895937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.895963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.896100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.896282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.896449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.896595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.896728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.896861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.896998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 
00:25:48.726 [2024-07-24 19:55:05.897142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.897315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.897453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.897599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.897756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.897912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.897939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.898100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.898126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.898232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.898268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.898396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.898422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.898535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.898561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 
00:25:48.726 [2024-07-24 19:55:05.898662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.898688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.726 qpair failed and we were unable to recover it. 00:25:48.726 [2024-07-24 19:55:05.898845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.726 [2024-07-24 19:55:05.898871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.898984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.899154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.899334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.899463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.899620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.899779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.899939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.899966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.900081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.900109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 
00:25:48.727 [2024-07-24 19:55:05.900252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.900291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.900400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.900425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.900547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.900573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.900706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.900732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.900836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.900862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.900993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.901152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.901315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.901467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.901631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 
00:25:48.727 [2024-07-24 19:55:05.901762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.901916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.901944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.902103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.902130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.902257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.902299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.902448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.902475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.902597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.902624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.902755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.902782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.902944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.902970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.903099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.903126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.903261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.903288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 
00:25:48.727 [2024-07-24 19:55:05.903434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.903461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.903565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.903591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.903726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.903752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.903904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.727 [2024-07-24 19:55:05.903931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.727 qpair failed and we were unable to recover it. 00:25:48.727 [2024-07-24 19:55:05.904093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.904119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.904257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.904286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.904437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.904463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.904602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.904629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.904737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.904765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.904897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.904926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 
00:25:48.728 [2024-07-24 19:55:05.905030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.905159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.905332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.905486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.905651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.905808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.905964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.905990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.906126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.906152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.906282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.906309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.906467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.906493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 
00:25:48.728 [2024-07-24 19:55:05.906630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.906656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.906778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.906805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.906938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.906964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.907068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.907094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.907203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.907231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.907376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.907403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.907541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.907568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.907688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.907715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.907856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.907883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.908015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.908042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 
00:25:48.728 [2024-07-24 19:55:05.908148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.908175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.728 qpair failed and we were unable to recover it. 00:25:48.728 [2024-07-24 19:55:05.908287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.728 [2024-07-24 19:55:05.908315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.908421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.908447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.908551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.908577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.908689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.908716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.908827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.908854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.908990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.909158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.909313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.909466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 
00:25:48.729 [2024-07-24 19:55:05.909650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.909812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.909951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.909978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.910966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.910994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 
00:25:48.729 [2024-07-24 19:55:05.911147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.911175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.911322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.911350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.911464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.911491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.729 qpair failed and we were unable to recover it. 00:25:48.729 [2024-07-24 19:55:05.911623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.729 [2024-07-24 19:55:05.911650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.911818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.911845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.911953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.911980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.912088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.912224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.912371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.912542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 
00:25:48.730 [2024-07-24 19:55:05.912704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.912833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.912966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.912992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.913147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.913278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.913416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.913560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.913727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.913864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.913997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 
00:25:48.730 [2024-07-24 19:55:05.914158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.914289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.914433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.914564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.914741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.914880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.914906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.915021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.915048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.915166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.915192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.915332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.915359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.915466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.915493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 
00:25:48.730 [2024-07-24 19:55:05.915640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.915667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.915799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.730 [2024-07-24 19:55:05.915825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.730 qpair failed and we were unable to recover it. 00:25:48.730 [2024-07-24 19:55:05.915990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.916156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.916318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.916462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.916605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.916741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.916914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.916941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.917078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 
00:25:48.731 [2024-07-24 19:55:05.917206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.917353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.917484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.917667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.917807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.917945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.917972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.918140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.918167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.918297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.918325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.918427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.918454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 00:25:48.731 [2024-07-24 19:55:05.918576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.731 [2024-07-24 19:55:05.918602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.731 qpair failed and we were unable to recover it. 
00:25:48.731 [2024-07-24 19:55:05.918760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.731 [2024-07-24 19:55:05.918787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.731 qpair failed and we were unable to recover it.
00:25:48.732 [2024-07-24 19:55:05.921205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.732 [2024-07-24 19:55:05.921251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.732 qpair failed and we were unable to recover it.
[... the same three-record error group (posix_sock_create connect() failed errno = 111 / nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats continuously from 19:55:05.918 through 19:55:05.951 (Jenkins timestamps 00:25:48.731-00:25:48.738), alternating between tqpair=0x5b5250 and tqpair=0x7fce84000b90; duplicate records elided ...]
00:25:48.738 [2024-07-24 19:55:05.951270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.951301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.951410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.951436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.951574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.951601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.951736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.951763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.951872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.951898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.952040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.952169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.952334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.952468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.952613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 
00:25:48.738 [2024-07-24 19:55:05.952781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.952964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.952991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.953957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.953983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.954108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.954134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 
00:25:48.738 [2024-07-24 19:55:05.954291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.954331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.954469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.954497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.954637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.954663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.954820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.954846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.954980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.955168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.955320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.955486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.955651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.955808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 
00:25:48.738 [2024-07-24 19:55:05.955968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.955994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.738 [2024-07-24 19:55:05.956123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.738 [2024-07-24 19:55:05.956149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.738 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.956298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.956339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.956458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.956485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.956598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.956625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.956735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.956762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.956866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.956893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.957007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.957172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.957315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 
00:25:48.739 [2024-07-24 19:55:05.957452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.957613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.957767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.957897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.957923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.958043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.958068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.958211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.958237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.958343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.958369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.958494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.958519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.958651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.958677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.958826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.958861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 
00:25:48.739 [2024-07-24 19:55:05.958995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.959127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.959319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.959485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.959623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.959768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.959896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.959922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.960085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.960110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.960211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.739 [2024-07-24 19:55:05.960237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.739 qpair failed and we were unable to recover it. 00:25:48.739 [2024-07-24 19:55:05.960352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.960378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 
00:25:48.740 [2024-07-24 19:55:05.960509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.960535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.960666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.960691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.960804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.960830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.960961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.960986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.961108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.961134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.961300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.961326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.961428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.961453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.961557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.961582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.961725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.961755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.961884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.961910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 
00:25:48.740 [2024-07-24 19:55:05.962069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.962095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.962231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.962264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.962373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.962400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.962506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.962532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.962688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.962714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.962856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.962881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.963007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.963032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.963137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.963164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.963300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.963327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.963443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.963470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 
00:25:48.740 [2024-07-24 19:55:05.963566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.963591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.740 [2024-07-24 19:55:05.963698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.740 [2024-07-24 19:55:05.963724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.740 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.963863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.963889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.963997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.964157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.964360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.964522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.964648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.964785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.964971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.964998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 
00:25:48.741 [2024-07-24 19:55:05.965131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.965157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.965288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.965314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.965417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.965444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.965578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.965605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.965739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.965766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.965875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.965905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.966042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.966069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.966202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.966228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.966339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.966366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.966502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.966529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 
00:25:48.741 [2024-07-24 19:55:05.966687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.966713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.966860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.966905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.967039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.967067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.967201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.967227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.967340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.967367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.967529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.967555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.967688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.967715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.967852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.967880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.968017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.968179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 
00:25:48.741 [2024-07-24 19:55:05.968314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.968446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.968603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.968758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.741 [2024-07-24 19:55:05.968921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.741 [2024-07-24 19:55:05.968946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.741 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.969080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.969249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.969390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.969520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.969655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 
00:25:48.742 [2024-07-24 19:55:05.969810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.969965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.969992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.970121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.970147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.970287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.970314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.970460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.970486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.970646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.970672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.970834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.970860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.970996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.971183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.971328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 
00:25:48.742 [2024-07-24 19:55:05.971457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.971651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.971785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.971915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.971942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.972050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.972076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.972236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.972278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.972395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.972422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.972533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.972559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.972693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.972721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 00:25:48.742 [2024-07-24 19:55:05.972849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.742 [2024-07-24 19:55:05.972875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.742 qpair failed and we were unable to recover it. 
00:25:48.743 [2024-07-24 19:55:05.976279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.743 [2024-07-24 19:55:05.976329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.743 qpair failed and we were unable to recover it.
00:25:48.744 [2024-07-24 19:55:05.982510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.744 [2024-07-24 19:55:05.982538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.744 qpair failed and we were unable to recover it.
00:25:48.745 [2024-07-24 19:55:05.986186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.745 [2024-07-24 19:55:05.986226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.745 qpair failed and we were unable to recover it.
00:25:48.745 [2024-07-24 19:55:05.986389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.745 [2024-07-24 19:55:05.986430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.745 qpair failed and we were unable to recover it.
00:25:48.745 [2024-07-24 19:55:05.988158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.745 [2024-07-24 19:55:05.988187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.745 qpair failed and we were unable to recover it.
00:25:48.745 [2024-07-24 19:55:05.989205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.745 [2024-07-24 19:55:05.989252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.745 qpair failed and we were unable to recover it.
00:25:48.746 [2024-07-24 19:55:05.990137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.746 [2024-07-24 19:55:05.990166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.746 qpair failed and we were unable to recover it.
00:25:48.746 [2024-07-24 19:55:05.993420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.746 [2024-07-24 19:55:05.993461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:48.746 qpair failed and we were unable to recover it.
00:25:48.746 [2024-07-24 19:55:05.993585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.746 [2024-07-24 19:55:05.993624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:48.746 qpair failed and we were unable to recover it.
00:25:48.747 [2024-07-24 19:55:05.994836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.747 [2024-07-24 19:55:05.994863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:48.747 qpair failed and we were unable to recover it.
00:25:48.748 [2024-07-24 19:55:06.001010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.748 [2024-07-24 19:55:06.001037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:48.748 qpair failed and we were unable to recover it.
00:25:48.748 [2024-07-24 19:55:06.004260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.748 [2024-07-24 19:55:06.004287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.748 qpair failed and we were unable to recover it. 00:25:48.748 [2024-07-24 19:55:06.004418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.748 [2024-07-24 19:55:06.004445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.748 qpair failed and we were unable to recover it. 00:25:48.748 [2024-07-24 19:55:06.004606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.748 [2024-07-24 19:55:06.004632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.748 qpair failed and we were unable to recover it. 00:25:48.748 [2024-07-24 19:55:06.004763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.748 [2024-07-24 19:55:06.004789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.748 qpair failed and we were unable to recover it. 00:25:48.748 [2024-07-24 19:55:06.004900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.748 [2024-07-24 19:55:06.004928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.748 qpair failed and we were unable to recover it. 00:25:48.748 [2024-07-24 19:55:06.005062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.005091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.005228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.005271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.005392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.005419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.005525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.005551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.005693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.005720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 
00:25:48.749 [2024-07-24 19:55:06.005858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.005885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.005997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.006140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.006279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.006404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.006562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.006719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.006884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.006910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.007073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.007198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 
00:25:48.749 [2024-07-24 19:55:06.007334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.007455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.007612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.007764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.007926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.007952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.008091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.008117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.008234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.008276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.008410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.008436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.008597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.008623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.008725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.008752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 
00:25:48.749 [2024-07-24 19:55:06.008854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.008880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.009037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.009223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.009361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.009518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.009682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.009873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.009985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.010011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.010113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.749 [2024-07-24 19:55:06.010139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.749 qpair failed and we were unable to recover it. 00:25:48.749 [2024-07-24 19:55:06.010269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.010302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 
00:25:48.750 [2024-07-24 19:55:06.010436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.010462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.010615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.010656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.010799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.010828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.010933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.010960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.011072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.011100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.011233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.011269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.011369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.011395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.011557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.011584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.011717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.011744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.011857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.011883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 
00:25:48.750 [2024-07-24 19:55:06.012050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.012076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.012232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.012272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.012408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.012434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.012585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.012613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.012780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.012807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.012919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.012946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.013049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.013076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.013188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.013214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.013363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.013392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.013561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.013588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 
00:25:48.750 [2024-07-24 19:55:06.013717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.013743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.013876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.013902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.014063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.014089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.014193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.014219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.014367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.014393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.014534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.014562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.014700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.014727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.750 [2024-07-24 19:55:06.014861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.750 [2024-07-24 19:55:06.014888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.750 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.015021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.015151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 
00:25:48.751 [2024-07-24 19:55:06.015306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.015469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.015609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.015794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.015963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.015991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.016103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.016129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.016234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.016266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:48.751 [2024-07-24 19:55:06.016388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.751 [2024-07-24 19:55:06.016415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:48.751 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.016552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.016580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.016725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.016756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 
00:25:49.037 [2024-07-24 19:55:06.016861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.016889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.017001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.017027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.017143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.017169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.017285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.017312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.017418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.017445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.017683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.017710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.017848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.017882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.018038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.018064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.018172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.018198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.018333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.018361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 
00:25:49.037 [2024-07-24 19:55:06.018497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.018525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.018630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.018657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.018822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.018849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.018976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.019003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.019114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.019140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.019282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.019310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.019417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.019443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.019546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.037 [2024-07-24 19:55:06.019571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.037 qpair failed and we were unable to recover it. 00:25:49.037 [2024-07-24 19:55:06.019706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.019733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.019861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.019887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 
00:25:49.038 [2024-07-24 19:55:06.019993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.020018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.020170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.020209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.020362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.020391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.020524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.020551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.020681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.020707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.020842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.020871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.020981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.021131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.021269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.021437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 
00:25:49.038 [2024-07-24 19:55:06.021590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.021746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.021907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.021933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.022920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.022946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 
00:25:49.038 [2024-07-24 19:55:06.023086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.023112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.023223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.023260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.023364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.023389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.023525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.023550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.023660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.023686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.023816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.023842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.024004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.024030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.024158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.024184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.024973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.025004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.025152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.025179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 
00:25:49.038 [2024-07-24 19:55:06.025292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.025320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.038 qpair failed and we were unable to recover it. 00:25:49.038 [2024-07-24 19:55:06.025434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.038 [2024-07-24 19:55:06.025460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.025575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.025611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.025751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.025777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.025922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.025959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.026060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.026085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.026194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.026226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.026340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.026367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.026475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.026502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 00:25:49.039 [2024-07-24 19:55:06.026663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.039 [2024-07-24 19:55:06.026689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.039 qpair failed and we were unable to recover it. 
00:25:49.045 [2024-07-24 19:55:06.056780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.056806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.056912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.056938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.057122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.057284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.057416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.057554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.057698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.057863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.057976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.058107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 
00:25:49.045 [2024-07-24 19:55:06.058235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.058375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.058533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.058696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.058860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.058886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.059019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.059170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.059321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.059490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.059631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 
00:25:49.045 [2024-07-24 19:55:06.059760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.059894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.059925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.060033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.060060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.060219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.060252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.045 [2024-07-24 19:55:06.060352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.045 [2024-07-24 19:55:06.060378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.045 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.060512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.060538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.060637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.060662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.060798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.060823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.060928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.060954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.061049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 
00:25:49.046 [2024-07-24 19:55:06.061210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.061356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.061490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.061649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.061779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.061965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.061991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.062096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.062121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.062260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.062286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.062416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.062442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.062548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.062574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 
00:25:49.046 [2024-07-24 19:55:06.062683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.062710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.062837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.062863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.063943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.063969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.064122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.064149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 
00:25:49.046 [2024-07-24 19:55:06.064290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.064317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.064419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.064445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.064583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.064608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.064766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.064792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.064924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.064951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.065064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.065089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.065220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.065254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.065396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.065422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.046 [2024-07-24 19:55:06.065558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.046 [2024-07-24 19:55:06.065583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.046 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.065718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.065743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 
00:25:49.047 [2024-07-24 19:55:06.065872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.065897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.066031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.066057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.066169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.066195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.066313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.066340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.066481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.066507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.066667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.066692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.066860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.066885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.067018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.067044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.067202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.067227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.067390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.067417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 
00:25:49.047 [2024-07-24 19:55:06.067559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.067585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.067687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.067712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.067843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.067868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.067978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.068106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.068291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.068433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.068586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.068768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.068905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.068930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 
00:25:49.047 [2024-07-24 19:55:06.069033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.069060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.069220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.069251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.069397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.069423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.069586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.069612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.069742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.069767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.069929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.069955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.070088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.070115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.070249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.070275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.070413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.070439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.047 [2024-07-24 19:55:06.070581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.070612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 
00:25:49.047 [2024-07-24 19:55:06.070748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.047 [2024-07-24 19:55:06.070773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.047 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.070886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.070912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.071046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.071182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.071374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.071567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.071698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.071854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.071993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.072145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 
00:25:49.048 [2024-07-24 19:55:06.072307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.072445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.072610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.072749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.072912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.072939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.073071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.073098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.073260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.073286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.073394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.073420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.073579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.073606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.073765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.073791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 
00:25:49.048 [2024-07-24 19:55:06.073900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.073927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.074086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.074253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.074396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.074556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.074706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.074862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.074975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.075001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.075133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.075159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.075267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.075295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 
00:25:49.048 [2024-07-24 19:55:06.075430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.048 [2024-07-24 19:55:06.075456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.048 qpair failed and we were unable to recover it. 00:25:49.048 [2024-07-24 19:55:06.075589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.075615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.075726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.075753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.075868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.075894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.076025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.076154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.076317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.076472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.076638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.076776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 
00:25:49.049 [2024-07-24 19:55:06.076931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.076961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.077967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.077994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.078129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.078155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 00:25:49.049 [2024-07-24 19:55:06.078288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.078316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it. 
00:25:49.049 [2024-07-24 19:55:06.078427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.049 [2024-07-24 19:55:06.078453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.049 qpair failed and we were unable to recover it.
[the same three-line failure record -- connect() refused with errno = 111, then a sock connection error on tqpair=0x5b5250 toward 10.0.0.2:4420, then "qpair failed and we were unable to recover it." -- repeats roughly 200 more times between 19:55:06.078 and 19:55:06.111; only the timestamps change]
00:25:49.056 [2024-07-24 19:55:06.111570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.111596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it.
00:25:49.056 [2024-07-24 19:55:06.111734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.111760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.111892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.111919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.112057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.112248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.112392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.112576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.112741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.112885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.112982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.113142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 
00:25:49.056 [2024-07-24 19:55:06.113291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.113478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.113627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.113787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.113926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.113953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.114110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.114137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.114301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.114326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.114448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.114473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.114580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.114607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.114746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.114772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 
00:25:49.056 [2024-07-24 19:55:06.114914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.114941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.115078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.115104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.115232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.115277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.115437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.115463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.115604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.115630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.115744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.115770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.115904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.115930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.116065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.056 [2024-07-24 19:55:06.116090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.056 qpair failed and we were unable to recover it. 00:25:49.056 [2024-07-24 19:55:06.116251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.116279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.116411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.116437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 
00:25:49.057 [2024-07-24 19:55:06.116567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.116593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.116707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.116734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.116842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.116869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.116964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.116990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.117117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.117144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.117269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.117296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.117421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.117447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.117559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.117585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.117720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.117745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.117851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.117878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 
00:25:49.057 [2024-07-24 19:55:06.117992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.118126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.118286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.118440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.118566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.118759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.118919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.118945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.119052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.119079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.119246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.119275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.119408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.119435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 
00:25:49.057 [2024-07-24 19:55:06.119543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.119569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.119670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.119697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.119832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.119861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.119975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.120101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.120259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.120442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.120575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.120705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.120860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.120886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 
00:25:49.057 [2024-07-24 19:55:06.121019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.057 [2024-07-24 19:55:06.121045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.057 qpair failed and we were unable to recover it. 00:25:49.057 [2024-07-24 19:55:06.121181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.121207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.121377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.121405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.121532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.121558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.121665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.121691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.121788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.121815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.121945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.121971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.122107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.122259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.122393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 
00:25:49.058 [2024-07-24 19:55:06.122525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.122675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.122834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.122965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.122990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.123097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.123123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.123234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.123284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.123421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.123449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.123555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.123581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.123711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.123737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.123893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.123919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 
00:25:49.058 [2024-07-24 19:55:06.124035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.124061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.124222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.124256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.124386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.124412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.124573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.124599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.124737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.124764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.124898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.124925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.125058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.125084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.125193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.125220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.125344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.058 [2024-07-24 19:55:06.125371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.058 qpair failed and we were unable to recover it. 00:25:49.058 [2024-07-24 19:55:06.125503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.125529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 
00:25:49.059 [2024-07-24 19:55:06.125663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.125689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.125829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.125854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.125982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.126166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.126342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.126479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.126630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.126761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.126942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.126968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.127103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.127131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 
00:25:49.059 [2024-07-24 19:55:06.127270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.127299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.127410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.127436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.127575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.127602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.127736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.127762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.127864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.127891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.128029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.128167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.128318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.128482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.128631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 
00:25:49.059 [2024-07-24 19:55:06.128763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.128926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.128951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.129119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.129144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.129277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.129306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.129441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.129467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.129630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.129656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.129756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.129783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.129892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.129918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.130048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.130075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.130177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.130204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 
00:25:49.059 [2024-07-24 19:55:06.130371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.130402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.130514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.059 [2024-07-24 19:55:06.130541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.059 qpair failed and we were unable to recover it. 00:25:49.059 [2024-07-24 19:55:06.130682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.130707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.130813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.130840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.130998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.131024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.131154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.131179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.131283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.131310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.131451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.131477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.131572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.131599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 00:25:49.060 [2024-07-24 19:55:06.131699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.060 [2024-07-24 19:55:06.131726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.060 qpair failed and we were unable to recover it. 
00:25:49.060 [2024-07-24 19:55:06.131834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.060 [2024-07-24 19:55:06.131862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.060 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 19:55:06.131834 through 19:55:06.164843; roughly 200 identical retries elided ...]
00:25:49.066 [2024-07-24 19:55:06.164817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.066 [2024-07-24 19:55:06.164843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.066 qpair failed and we were unable to recover it.
00:25:49.066 [2024-07-24 19:55:06.164946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.164972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.165109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.165290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.165415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.165571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.165704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.165846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.165987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.166013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.066 [2024-07-24 19:55:06.166150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.066 [2024-07-24 19:55:06.166177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.066 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.166316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.166343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 
00:25:49.067 [2024-07-24 19:55:06.166447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.166472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.166607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.166634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.166771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.166798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.166952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.166979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.167119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.167145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.167252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.167279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.167389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.167416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.167517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.167544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.167678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.167705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.167851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.167877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 
00:25:49.067 [2024-07-24 19:55:06.167985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.168141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.168263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.168419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.168574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.168702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.168892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.168918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.169053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.169080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.169206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.169232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.169381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.169407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 
00:25:49.067 [2024-07-24 19:55:06.169538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.169565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.169699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.169725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.169858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.169886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.170955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.170982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 
00:25:49.067 [2024-07-24 19:55:06.171115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.171142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.171273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.067 [2024-07-24 19:55:06.171300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.067 qpair failed and we were unable to recover it. 00:25:49.067 [2024-07-24 19:55:06.171408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.171433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.171569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.171596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.171711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.171737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.171870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.171897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.172032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.172058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.172164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.172190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.172344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.172370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.172533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.172559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 
00:25:49.068 [2024-07-24 19:55:06.172718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.172745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.172858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.172884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.172994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.173151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.173290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.173476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.173631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.173790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.173947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.173973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.174073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.174099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 
00:25:49.068 [2024-07-24 19:55:06.174206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.174233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.174385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.174411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.174572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.174598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.174731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.174757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.174871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.174897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.175053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.175079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.175237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.175287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.175454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.175482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.175647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.175674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.175812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.175838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 
00:25:49.068 [2024-07-24 19:55:06.175977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.176005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.176139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.176166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.176275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.176303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.176432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.176458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.176604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.176631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.068 [2024-07-24 19:55:06.176761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.068 [2024-07-24 19:55:06.176788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.068 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.176948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.176974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.177109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.177135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.177239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.177276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.177417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.177445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 
00:25:49.069 [2024-07-24 19:55:06.177562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.177589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.177716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.177743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.177869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.177895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.178946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.178974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 
00:25:49.069 [2024-07-24 19:55:06.179110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.179136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.179269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.179297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.179401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.179428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.179580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.179609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.179740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.179768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.179900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.179927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.180060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.180250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.180388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.180545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 
00:25:49.069 [2024-07-24 19:55:06.180675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.180813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.180971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.180998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.181129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.181156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.069 [2024-07-24 19:55:06.181263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.069 [2024-07-24 19:55:06.181291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.069 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.181420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.181447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.181579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.181612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.181745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.181773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.181905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.181932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.182094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.182121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 
00:25:49.070 [2024-07-24 19:55:06.182227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.182266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.182402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.182429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.182540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.182567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.182707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.182734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.182850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.182877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.183008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.183035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.183197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.183224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.183344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.183372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.183536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.183563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.183700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.183728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 
00:25:49.070 [2024-07-24 19:55:06.183838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.183866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.183986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.184014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.184134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.184162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.184338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.184380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.184519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.184547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.184683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.184710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.184811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.184837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.184976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.185002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.185105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.185132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 00:25:49.070 [2024-07-24 19:55:06.185249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.070 [2024-07-24 19:55:06.185279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.070 qpair failed and we were unable to recover it. 
00:25:49.070 [2024-07-24 19:55:06.185419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.070 [2024-07-24 19:55:06.185447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.070 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 19:55:06.185419 through 19:55:06.218166, interleaved across tqpair=0x5b5250, 0x7fce7c000b90, 0x7fce84000b90, and 0x7fce8c000b90, always with addr=10.0.0.2, port=4420 ...]
00:25:49.077 [2024-07-24 19:55:06.218139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.077 [2024-07-24 19:55:06.218166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.077 qpair failed and we were unable to recover it.
00:25:49.077 [2024-07-24 19:55:06.218314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.218355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.218489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.218519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.218615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.218642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.218776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.218803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.218902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.218929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.219054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.219095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.219218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.219251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.219363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.219391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.219496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.219524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.219661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.219688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 
00:25:49.077 [2024-07-24 19:55:06.219825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.219852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.219988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.220178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.220326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.220472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.220624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.220788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.220951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.220978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.221112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.221140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.077 [2024-07-24 19:55:06.221261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.221291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 
00:25:49.077 [2024-07-24 19:55:06.221461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.077 [2024-07-24 19:55:06.221488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.077 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.221623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.221650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.221763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.221790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.221921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.221949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.222051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.222178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.222342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.222501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.222662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.222818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 
00:25:49.078 [2024-07-24 19:55:06.222973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.222999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.223119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.223264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.223403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.223543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.223705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.223870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.223974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.224143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.224280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 
00:25:49.078 [2024-07-24 19:55:06.224457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.224619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.224745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.224876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.224902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.225044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.225072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.225209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.225236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.225359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.225386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.225486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.225513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.225675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.225702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.225838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.225865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 
00:25:49.078 [2024-07-24 19:55:06.225996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.226024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.226170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.226196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.226353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.226394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.226501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.078 [2024-07-24 19:55:06.226530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.078 qpair failed and we were unable to recover it. 00:25:49.078 [2024-07-24 19:55:06.226660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.226686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.226814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.226841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.226952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.226979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.227084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.227258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.227396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 
00:25:49.079 [2024-07-24 19:55:06.227535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.227668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.227797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.227929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.227956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.228088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.228115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.228262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.228304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.228441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.228469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.228600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.228627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.228772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.228799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.228956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.228984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 
00:25:49.079 [2024-07-24 19:55:06.229112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.229139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.229282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.229310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.229448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.229476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.229606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.229634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.229763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.229791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.229900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.229928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.230093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.230254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.230379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.230510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 
00:25:49.079 [2024-07-24 19:55:06.230650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.230838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.230969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.230996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.231131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.231158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.231286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.231314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.079 qpair failed and we were unable to recover it. 00:25:49.079 [2024-07-24 19:55:06.231421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.079 [2024-07-24 19:55:06.231449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.231596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.231623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.231782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.231809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.231915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.231943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.232067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.232094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 
00:25:49.080 [2024-07-24 19:55:06.232238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.232284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.232400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.232428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.232565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.232593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.232699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.232727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.232878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.232919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.233032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.233060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.233188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.233214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.233336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.233365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.233497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.233523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.233680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.233706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 
00:25:49.080 [2024-07-24 19:55:06.233816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.233843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.233974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.234104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.234300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.234462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.234625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.234774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.234938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.234964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.235098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.235124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.235227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.235261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 
00:25:49.080 [2024-07-24 19:55:06.235396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.235422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.235549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.235575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.235735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.235761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.235897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.235923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.236048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.236074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.236227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.236277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.236435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.236476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.236625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.236653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.080 [2024-07-24 19:55:06.236782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.080 [2024-07-24 19:55:06.236809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.080 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.236943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.236971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 
00:25:49.081 [2024-07-24 19:55:06.237132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.237158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.237295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.237323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.237455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.237482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.237614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.237641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.237756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.237783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.237907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.237933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.238094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.238258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.238392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.238560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 
00:25:49.081 [2024-07-24 19:55:06.238689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.238844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.238962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.238989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.239108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.239138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.239313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.239353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.239480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.239521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.239639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.239668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.239780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.239809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.239944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.239973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.240091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.240120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 
00:25:49.081 [2024-07-24 19:55:06.240257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.240285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.240427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.240455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.240589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.240616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.240727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.240755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.240865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.240892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.241023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.241200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.241363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.241504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.241636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 
00:25:49.081 [2024-07-24 19:55:06.241796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.241926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.241952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.242053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.242079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.242190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.242216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.081 qpair failed and we were unable to recover it. 00:25:49.081 [2024-07-24 19:55:06.242380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.081 [2024-07-24 19:55:06.242407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.242552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.242579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.242686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.242713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.242847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.242874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.243014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.243199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 
00:25:49.082 [2024-07-24 19:55:06.243356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.243522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.243665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.243806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.243965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.243993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.244157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.244184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.244330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.244357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.244470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.244496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.244661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.244688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.244846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.244873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 
00:25:49.082 [2024-07-24 19:55:06.244983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.245128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.245267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.245409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.245607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.245796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.245955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.245981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.246138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.246165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.246268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.246295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.246403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.246430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 
00:25:49.082 [2024-07-24 19:55:06.246531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.246558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.246664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.246691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.246854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.246880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.247051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.247255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.247407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.247566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.247720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.247886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.247992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.248019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 
00:25:49.082 [2024-07-24 19:55:06.248123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.248150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.082 [2024-07-24 19:55:06.248282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.082 [2024-07-24 19:55:06.248310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.082 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.248417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.248444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.248552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.248579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.248681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.248708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.248812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.248839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.248938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.248966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.249081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.249122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.249279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.249319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.249439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.249468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 
00:25:49.083 [2024-07-24 19:55:06.249602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.249629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.249768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.249796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.249960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.249987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.250137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.250163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.250270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.250297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.250404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.250431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.250531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.250559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.250662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.250689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.250846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.250873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.251008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.251038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 
00:25:49.083 [2024-07-24 19:55:06.251198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.251229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.251373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.251400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.251532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.251559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.251689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.251716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.251824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.251852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.251985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.252128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.252288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.252418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.252551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 
00:25:49.083 [2024-07-24 19:55:06.252670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.252854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.252880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.252986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.253013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.253130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.253171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.253319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.253360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.253509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.253538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.253675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.253702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.253819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.253847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.253982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.254010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.083 qpair failed and we were unable to recover it. 00:25:49.083 [2024-07-24 19:55:06.254107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.083 [2024-07-24 19:55:06.254136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 
00:25:49.084 [2024-07-24 19:55:06.254253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.254281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.254393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.254420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.254557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.254585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.254713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.254740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.254871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.254898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.255000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.255152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.255312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.255452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.255615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 
00:25:49.084 [2024-07-24 19:55:06.255776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.255925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.255952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.256083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.256111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.256235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.256282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.256394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.256423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.256587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.256614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.256749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.256777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.256939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.256966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.257077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.257104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.257207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.257235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 
00:25:49.084 [2024-07-24 19:55:06.257374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.257401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.257545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.257572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.257671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.257699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.257793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.257832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.257977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.258145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.258333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.258520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.258661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.258820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 
00:25:49.084 [2024-07-24 19:55:06.258944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.258971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.259097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.259124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.259226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.084 [2024-07-24 19:55:06.259263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.084 qpair failed and we were unable to recover it. 00:25:49.084 [2024-07-24 19:55:06.259410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.259435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.259568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.259599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.259714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.259740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.259864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.259891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.260021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.260184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.260333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 
00:25:49.085 [2024-07-24 19:55:06.260518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.260649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.260807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.260937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.260963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.261105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.261144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.261290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.261330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.261471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.261497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.261627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.261653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.261763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.261788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.261897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.261922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 
00:25:49.085 [2024-07-24 19:55:06.262083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.262111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.262225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.262259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.262396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.262422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.262537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.262563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.262675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.262701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.262863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.262888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.263021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.263155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.263292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.263453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 
00:25:49.085 [2024-07-24 19:55:06.263610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.263782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.263963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.263988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.264121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.264146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.264281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.264307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.264466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.264493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.264605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.264632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.264761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.264786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.264895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.264921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.265059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.265086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 
00:25:49.085 [2024-07-24 19:55:06.265193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.085 [2024-07-24 19:55:06.265221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.085 qpair failed and we were unable to recover it. 00:25:49.085 [2024-07-24 19:55:06.265350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.265389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.265502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.265529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.265661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.265687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.265786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.265817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.265955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.265981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.266110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.266239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.266377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.266507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 
00:25:49.086 [2024-07-24 19:55:06.266658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.266821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.266950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.266976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.267082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.267108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.267255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.267294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.267417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.267445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.267564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.267591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.267754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.267779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.267889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.267915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 00:25:49.086 [2024-07-24 19:55:06.268050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.086 [2024-07-24 19:55:06.268075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.086 qpair failed and we were unable to recover it. 
00:25:49.086 [2024-07-24 19:55:06.268211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.268236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.268410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.268448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.268611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.268650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.268785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.268812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.268947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.268973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.269108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.269134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.269276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.269303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.269441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.269467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.269612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.269637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.269771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.269796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.269891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.269915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.270025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.270056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.270191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.270217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.270376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.270416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.270534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.270562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.270693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.270718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.270891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.270917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.271021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.271047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.086 [2024-07-24 19:55:06.271179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.086 [2024-07-24 19:55:06.271204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.086 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.271330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.271357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.271503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.271529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.271636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.271663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.271798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.271824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.271959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.271985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.272123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.272149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.272264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.272292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.272440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.272466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.272603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.272630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.272765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.272793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.272925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.272950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.273967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.273992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.274124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.274149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.274284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.274321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.274486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.274511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.274641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.274666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.274802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.274828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.274945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.274970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.275956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.275982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.276118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.276144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.276261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.276312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.276445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.276474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.276577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.276603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.276737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.276763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.276898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.276926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.277085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.277110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.277263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.277290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.277448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.277473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.087 qpair failed and we were unable to recover it.
00:25:49.087 [2024-07-24 19:55:06.277579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.087 [2024-07-24 19:55:06.277604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.277734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.277759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.277922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.277947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.278102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.278127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.278234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.278265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.278413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.278438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.278541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.278570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.278733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.278759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.278858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.278884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.279962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.279987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.280106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.280145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.280296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.280336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.280454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.280482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.280648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.280674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.280788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.280814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.280912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.280937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.281937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.281962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.282963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.282991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.283134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.283173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.283306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.283345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.283463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.283490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.283630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.283656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.283822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.088 [2024-07-24 19:55:06.283848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.088 qpair failed and we were unable to recover it.
00:25:49.088 [2024-07-24 19:55:06.283988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.284015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.284161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.284200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.284358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.284386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.284503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.284528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.284688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.284713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.284845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.284870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.285035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.285196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.285366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.285504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.285685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.285866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.285979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.286005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.286146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.286172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.286328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.286366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.286507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.286534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.286665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.286690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.286856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.286882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.287970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.287995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.288108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.288134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.288254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.288279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.288411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.288438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.288566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.288591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.089 qpair failed and we were unable to recover it.
00:25:49.089 [2024-07-24 19:55:06.288699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.089 [2024-07-24 19:55:06.288724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.288842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.288867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.288999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.289136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.289306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.289475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.289606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.289763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.289892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.289919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.290871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.290897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.291083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.291266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.291415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.291547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.291728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.291852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.291981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.292163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.292298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.292432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.292616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.292745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.292902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.292927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.090 [2024-07-24 19:55:06.293072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.090 [2024-07-24 19:55:06.293099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.090 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.293213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.293240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.293367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.293393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.293500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.293526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.293656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.293681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.293813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.293838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.293973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.293998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.294107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.294132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.294267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.294305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.294412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.294439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.294597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.091 [2024-07-24 19:55:06.294623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.091 qpair failed and we were unable to recover it.
00:25:49.091 [2024-07-24 19:55:06.294779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.294805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.294943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.294970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.295129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.295160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.295296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.295322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.295472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.295497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.295605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.295631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.295763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.295788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.295924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.295950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.296114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.296140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.296277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.296306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 
00:25:49.091 [2024-07-24 19:55:06.296449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.296474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.296584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.296609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.296769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.296794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.296899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.296924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.297053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.297078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.297214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.297239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.297414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.297439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.297571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.297596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.297697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.297722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.297861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.297887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 
00:25:49.091 [2024-07-24 19:55:06.298019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.298046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.298184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.091 [2024-07-24 19:55:06.298210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.091 qpair failed and we were unable to recover it. 00:25:49.091 [2024-07-24 19:55:06.298360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.298399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.298520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.298559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.298696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.298723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.298831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.298857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.298964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.298990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.299117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.299143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.299277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.299308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.299451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.299480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 
00:25:49.092 [2024-07-24 19:55:06.299588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.299614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.299716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.299741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.299860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.299886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.300914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.300939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 
00:25:49.092 [2024-07-24 19:55:06.301067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.301223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.301368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.301530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.301660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.301815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.301970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.301996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.302136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.302175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.302286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.302314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.302446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.302473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 
00:25:49.092 [2024-07-24 19:55:06.302601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.302626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.302725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.302750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.302862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.302887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.302992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.303018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.092 [2024-07-24 19:55:06.303155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.092 [2024-07-24 19:55:06.303180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.092 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.303367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.303406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.303520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.303546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.303679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.303705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.303832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.303857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.303987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 
00:25:49.093 [2024-07-24 19:55:06.304171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.304365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.304508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.304635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.304792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.304923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.304948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.305109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.305134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.305258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.305297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.305413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.305440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.305573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.305608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 
00:25:49.093 [2024-07-24 19:55:06.305709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.305734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.305874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.305899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.306950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.306975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.307111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.307136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 
00:25:49.093 [2024-07-24 19:55:06.307235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.307267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.307425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.307450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.307587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.307614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.307752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.307779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.307884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.307909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.308014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.308040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.308149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.308175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.093 [2024-07-24 19:55:06.308282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.093 [2024-07-24 19:55:06.308310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.093 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.308459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.308485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.308620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.308647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 
00:25:49.094 [2024-07-24 19:55:06.308793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.308818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.308921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.308946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.309059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.309087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.309251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.309277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.309381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.309407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.309573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.309599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.309735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.309761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.309893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.309920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.310056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.310082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.310181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.310207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 
00:25:49.094 [2024-07-24 19:55:06.310350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.310383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.310510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.310548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.310686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.310713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.310845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.310870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.310974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.311142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.311327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.311488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.311648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.311810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 
00:25:49.094 [2024-07-24 19:55:06.311973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.311999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.312156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.312182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.312316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.312341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.312459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.094 [2024-07-24 19:55:06.312488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.094 qpair failed and we were unable to recover it. 00:25:49.094 [2024-07-24 19:55:06.312600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.312628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.312730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.312755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.312862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.312887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.313016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.313175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.313373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 
00:25:49.095 [2024-07-24 19:55:06.313506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.313639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.313800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.313933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.313958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.314060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.314086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.314216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.314246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.314376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.314401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.314535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.314560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.314695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.314722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.314860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.314885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 
00:25:49.095 [2024-07-24 19:55:06.314993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.315856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.315982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.316008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.316138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.316163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.316310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.316336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 
00:25:49.095 [2024-07-24 19:55:06.316483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.316521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.316664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.316691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.316820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.316846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.317006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.317032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.317160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.317186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.317336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.317375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.095 [2024-07-24 19:55:06.317492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.095 [2024-07-24 19:55:06.317519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.095 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.317635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.317661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.317794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.317819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.317960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.317985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 
00:25:49.096 [2024-07-24 19:55:06.318098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.318123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.318250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.318290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.318409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.318436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.318563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.318588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.318721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.318746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.318910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.318938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.319042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.319067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.319178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.319204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.319320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.319347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.319457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.319483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 
00:25:49.096 [2024-07-24 19:55:06.319640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.319665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.319823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.319848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.319977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.320002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.320122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.320160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.320338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.320377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.320560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.320598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.320738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.320765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.320902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.320928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.321043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.321188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 
00:25:49.096 [2024-07-24 19:55:06.321379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.321518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.321697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.321833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.321964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.321989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.322098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.322127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.322235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.322267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.322383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.322409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.322547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.322573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.096 qpair failed and we were unable to recover it. 00:25:49.096 [2024-07-24 19:55:06.322710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.096 [2024-07-24 19:55:06.322736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 
00:25:49.097 [2024-07-24 19:55:06.322874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.322900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.323879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.323904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.324010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.324141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 
00:25:49.097 [2024-07-24 19:55:06.324307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.324469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.324649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.324807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.324970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.324996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.325114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.325152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.325272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.325300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.325437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.325464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.325594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.325620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.325726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.325752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 
00:25:49.097 [2024-07-24 19:55:06.325856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.325882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.326918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.326944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.327076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.327102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.327235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.327265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 
00:25:49.097 [2024-07-24 19:55:06.327367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.327393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.097 [2024-07-24 19:55:06.327525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.097 [2024-07-24 19:55:06.327554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.097 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.327709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.327734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.327838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.327865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.328000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.328149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.328331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.328470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.328634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.328804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 
00:25:49.098 [2024-07-24 19:55:06.328948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.328974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.329077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.329103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.329229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.329260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.329397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.329423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.329522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.329547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.329706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.329732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.329869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.329897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.330034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.330165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.330326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 
00:25:49.098 [2024-07-24 19:55:06.330490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.330618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.330747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.330904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.330930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.331065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.331189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.331347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.331525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.331683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.331811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 
00:25:49.098 [2024-07-24 19:55:06.331941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.331966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.332102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.332127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.332233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.332270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.332391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.098 [2024-07-24 19:55:06.332429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.098 qpair failed and we were unable to recover it. 00:25:49.098 [2024-07-24 19:55:06.332564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.332590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.332695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.332721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.332813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.332839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.332944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.332969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.333103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.333131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.333261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.333299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 
00:25:49.099 [2024-07-24 19:55:06.333435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.333462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.333587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.333613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.333742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.333767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.333869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.333895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.333997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.334132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.334274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.334464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.334614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.334740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 
00:25:49.099 [2024-07-24 19:55:06.334868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.334893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.335944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.335971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.336104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.336129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.336238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.336271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 
00:25:49.099 [2024-07-24 19:55:06.336426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.336457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.336565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.336591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.336735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.336760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.099 qpair failed and we were unable to recover it. 00:25:49.099 [2024-07-24 19:55:06.336893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.099 [2024-07-24 19:55:06.336917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.337049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.337074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.337252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.337279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.337410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.337437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.337598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.337624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.337754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.337779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.337880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.337905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 
00:25:49.100 [2024-07-24 19:55:06.338034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.338221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.338365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.338505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.338641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.338772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.338928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.338954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.339082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.339107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.339229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.339277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.339421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.339447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 
00:25:49.100 [2024-07-24 19:55:06.339594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.339619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.339752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.339777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.339877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.339902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.340037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.340062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.340190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.340216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.340344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.340383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.340522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.340549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.340696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.340724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.340872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.340897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.341011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 
00:25:49.100 [2024-07-24 19:55:06.341149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.341278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.341437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.341584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.341715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.100 qpair failed and we were unable to recover it. 00:25:49.100 [2024-07-24 19:55:06.341845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.100 [2024-07-24 19:55:06.341870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 
00:25:49.101 [2024-07-24 19:55:06.342542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.342928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.342953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.343074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.343113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.343250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.343277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.343405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.343432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.343561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.343586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.343716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.343742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.343860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.343887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 
00:25:49.101 [2024-07-24 19:55:06.344016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.344170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.344310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.344473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.344648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.344798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.344939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.344965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.345095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.345120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.345273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.345298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.345433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.345458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 
00:25:49.101 [2024-07-24 19:55:06.345556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.345581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.345714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.345740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.345846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.345870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.346003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.101 [2024-07-24 19:55:06.346029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.101 qpair failed and we were unable to recover it. 00:25:49.101 [2024-07-24 19:55:06.346130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.102 [2024-07-24 19:55:06.346156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.102 qpair failed and we were unable to recover it. 00:25:49.102 [2024-07-24 19:55:06.346289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.102 [2024-07-24 19:55:06.346327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.102 qpair failed and we were unable to recover it. 00:25:49.102 [2024-07-24 19:55:06.346438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.102 [2024-07-24 19:55:06.346467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.102 qpair failed and we were unable to recover it. 00:25:49.102 [2024-07-24 19:55:06.346584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.102 [2024-07-24 19:55:06.346610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.102 qpair failed and we were unable to recover it. 00:25:49.102 [2024-07-24 19:55:06.346742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.102 [2024-07-24 19:55:06.346768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.102 qpair failed and we were unable to recover it. 00:25:49.102 [2024-07-24 19:55:06.346904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.102 [2024-07-24 19:55:06.346930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.102 qpair failed and we were unable to recover it. 
00:25:49.102 [2024-07-24 19:55:06.347060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.347085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.347255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.347282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.347392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.347417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.347553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.347579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.347686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.347713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.347842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.347867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.347997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.348156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.348323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.348497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.348661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.348808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.348931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.348956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.349964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.349990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.350120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.350158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.350323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.350352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.350455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.350481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.350581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.350606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.350762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.350787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.350922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.102 [2024-07-24 19:55:06.350948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.102 qpair failed and we were unable to recover it.
00:25:49.102 [2024-07-24 19:55:06.351112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.351279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.351410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.351538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.351694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.351846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.351973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.351998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.352112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.352150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.352280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.352319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.352467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.352494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.352639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.352665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.352797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.352823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.352934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.352961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.353076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.353103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.353247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.353276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.353425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.353463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.353626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.353653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.353813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.353839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.353944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.353969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.354110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.354137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.354275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.354303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.354468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.354493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.354621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.354646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.354799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.354824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.354925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.354949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.355938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.355963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.356092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.356118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.356253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.356279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.356389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.103 [2024-07-24 19:55:06.356414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.103 qpair failed and we were unable to recover it.
00:25:49.103 [2024-07-24 19:55:06.356544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.356569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.356679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.356705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.356862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.356887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.357017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.357042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.357150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.357180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.357341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.357367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.357503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.357528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.357660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.357685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.357843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.357868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.358932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.358957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.359092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.359262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.359403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.359536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.359687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.359839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.359997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.360975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.360999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.361101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.361127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.361266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.361296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.361404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.361429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.104 [2024-07-24 19:55:06.361534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.104 [2024-07-24 19:55:06.361559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.104 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.361697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.361722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.361853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.361879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.362864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.362889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.363971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.363997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.364964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.364989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.365119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.365144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.365254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.365284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.365423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.365448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.365585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.105 [2024-07-24 19:55:06.365610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.105 qpair failed and we were unable to recover it.
00:25:49.105 [2024-07-24 19:55:06.365723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.365748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.365905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.365930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.366885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.366987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.367148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.367288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.367426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.367582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.367763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.367915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.367940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.368942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.368967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.369067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.369093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.369251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.369291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.369442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.369470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.369602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.369628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.369732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.369758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.369886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.369912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.370040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.370065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.370183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.370222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.370382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.370420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.370537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.370563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.106 qpair failed and we were unable to recover it.
00:25:49.106 [2024-07-24 19:55:06.370665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.106 [2024-07-24 19:55:06.370690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.370794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.370820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.370941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.370968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.371962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.371987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.372093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.372119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.372238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.372285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.372448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.372475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.372576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.372601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.372736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.107 [2024-07-24 19:55:06.372761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.107 qpair failed and we were unable to recover it.
00:25:49.107 [2024-07-24 19:55:06.372891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.372917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.373884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.373909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.374044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.374069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.374206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.374230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 
00:25:49.107 [2024-07-24 19:55:06.374398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.374423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.374528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.374553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.374716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.374741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.374863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.374889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.375017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.375042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.375144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.375168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.375296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.375322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.375428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.375454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.375564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.107 [2024-07-24 19:55:06.375589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.107 qpair failed and we were unable to recover it. 00:25:49.107 [2024-07-24 19:55:06.375715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.375740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 
00:25:49.108 [2024-07-24 19:55:06.375899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.375924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.376919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.376945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.377057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.377195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 
00:25:49.108 [2024-07-24 19:55:06.377354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.377508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.377641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.377776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.377936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.377963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.378122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.378249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.378409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.378562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.378695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 
00:25:49.108 [2024-07-24 19:55:06.378823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.378960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.378985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.379115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.379140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.379249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.379275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.379392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.379431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.379571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.379598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.379731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.379757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.379860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.379885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.380017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.380042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.380175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.380201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 
00:25:49.108 [2024-07-24 19:55:06.380351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.108 [2024-07-24 19:55:06.380378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.108 qpair failed and we were unable to recover it. 00:25:49.108 [2024-07-24 19:55:06.380538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.380564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.380696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.380721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.380821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.380847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.380956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.380984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.381123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.381149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.381260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.381287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.381428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.381461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.381597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.381623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.381755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.381781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 
00:25:49.109 [2024-07-24 19:55:06.381940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.381966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.382095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.382121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.382232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.382266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.382379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.382406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.382512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.382537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.382671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.382697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.382857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.382883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.383041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.383172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.383320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 
00:25:49.109 [2024-07-24 19:55:06.383457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.383600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.383735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.383897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.383923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.384080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.384106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.384237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.384275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.384437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.384463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.384566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.384592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.384694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.384720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.384877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.384902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 
00:25:49.109 [2024-07-24 19:55:06.385044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.385072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.385223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.385279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.385394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.385420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.385530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.385556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.109 [2024-07-24 19:55:06.385691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.109 [2024-07-24 19:55:06.385717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.109 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.385848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.385874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.385982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.386117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.386257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.386410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 
00:25:49.110 [2024-07-24 19:55:06.386576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.386717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.386909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.386935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.387071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.387097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.387206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.387233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.387381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.387418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.387574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.387613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.387752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.387785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.387920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.387946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.388054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 
00:25:49.110 [2024-07-24 19:55:06.388216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.388367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.388521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.388678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.388812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.388953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.388980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.389119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.389144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.389249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.389275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.389410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.389437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.110 [2024-07-24 19:55:06.389579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.389605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 
00:25:49.110 [2024-07-24 19:55:06.389738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.110 [2024-07-24 19:55:06.389763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.110 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.389871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.389897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.390878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.390903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.391004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.391029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 
00:25:49.402 [2024-07-24 19:55:06.391146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.391172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.391272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.391297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.391403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.391429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.391554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.391579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.402 [2024-07-24 19:55:06.391728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.402 [2024-07-24 19:55:06.391767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.402 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.391875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.391901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.392012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.392159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.392314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.392474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 
00:25:49.403 [2024-07-24 19:55:06.392602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.392726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.392859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.392884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.393866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.393892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 
00:25:49.403 [2024-07-24 19:55:06.393994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.403 [2024-07-24 19:55:06.394020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.403 qpair failed and we were unable to recover it. 00:25:49.403 [2024-07-24 19:55:06.394145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.394170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.394285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.394310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.394412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.394437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.394570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.394596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.394700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.394727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.394858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.394883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.394979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.395004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.395135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.395161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 00:25:49.404 [2024-07-24 19:55:06.395268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.404 [2024-07-24 19:55:06.395298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.404 qpair failed and we were unable to recover it. 
00:25:49.404 [2024-07-24 19:55:06.395438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.404 [2024-07-24 19:55:06.395463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.404 qpair failed and we were unable to recover it.
00:25:49.404 [the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 19:55:06.395608 through 19:55:06.426816 (elapsed 00:25:49.404 to 00:25:49.417), mostly for tqpair=0x7fce8c000b90 with shorter runs for tqpair=0x7fce84000b90, always with addr=10.0.0.2, port=4420]
00:25:49.417 [2024-07-24 19:55:06.426942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.426967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.427100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.427266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.427404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.427595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.427721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.427881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.427984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.428117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.428304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 
00:25:49.417 [2024-07-24 19:55:06.428460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.428628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.428780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.428906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.428930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.429087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.429113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.429273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.429299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.429437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.429464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.429625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.429650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.429758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.429783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 00:25:49.417 [2024-07-24 19:55:06.429943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.417 [2024-07-24 19:55:06.429968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.417 qpair failed and we were unable to recover it. 
00:25:49.417 [2024-07-24 19:55:06.430102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.430127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.430234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.430266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.430396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.430422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.430565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.430591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.430724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.430749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.430882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.430908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 
00:25:49.418 [2024-07-24 19:55:06.431557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.431883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.431988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.432112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.432252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.432416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.432584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.432741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.432918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.432943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 
00:25:49.418 [2024-07-24 19:55:06.433045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.433071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.433198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.433224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.433384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.433423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.433536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.418 [2024-07-24 19:55:06.433562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.418 qpair failed and we were unable to recover it. 00:25:49.418 [2024-07-24 19:55:06.433671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.433704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.433839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.433865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.433981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.434114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.434308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.434474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 
00:25:49.419 [2024-07-24 19:55:06.434635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.434765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.434903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.434928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.435091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.435235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.435413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.435558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.435715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.435853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.435987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 
00:25:49.419 [2024-07-24 19:55:06.436141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.436277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.436431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.436669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.436827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.436954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.436981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.437112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.437138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.437293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.437319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.437424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.437449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.437584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.437610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 
00:25:49.419 [2024-07-24 19:55:06.437708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.437734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.437895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.437925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.438050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.438076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.438175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.419 [2024-07-24 19:55:06.438200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.419 qpair failed and we were unable to recover it. 00:25:49.419 [2024-07-24 19:55:06.438340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.438367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.438528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.438553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.438689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.438714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.438807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.438833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.438961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.438986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.439088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.439114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 
00:25:49.420 [2024-07-24 19:55:06.439239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.439269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.439378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.439403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.439545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.439574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.439676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.439702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.439827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.439853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.439992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.440154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.440287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.440428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.440561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 
00:25:49.420 [2024-07-24 19:55:06.440714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.440866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.440892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.441021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.441046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.420 [2024-07-24 19:55:06.441146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.420 [2024-07-24 19:55:06.441172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.420 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.441312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.441339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.441459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.441485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.441618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.441643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.441771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.441796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.441937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.441964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.442122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.442148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 
00:25:49.421 [2024-07-24 19:55:06.442262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.442288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.442421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.442446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.442558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.442583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.442740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.442765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.442902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.442927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.443034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.443164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.443295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.443428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.443593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 
00:25:49.421 [2024-07-24 19:55:06.443764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.443903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.443929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.444040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.444066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.444196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.444221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.444353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.444380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.444516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.444542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.444650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.444675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.444839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.444865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.445000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.445136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 
00:25:49.421 [2024-07-24 19:55:06.445298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.445428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.445592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.445775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.445936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.445961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.421 [2024-07-24 19:55:06.446064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.421 [2024-07-24 19:55:06.446096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.421 qpair failed and we were unable to recover it. 00:25:49.422 [2024-07-24 19:55:06.446202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.422 [2024-07-24 19:55:06.446230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.422 qpair failed and we were unable to recover it. 00:25:49.422 [2024-07-24 19:55:06.446409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.422 [2024-07-24 19:55:06.446435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.422 qpair failed and we were unable to recover it. 00:25:49.422 [2024-07-24 19:55:06.446540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.422 [2024-07-24 19:55:06.446566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.422 qpair failed and we were unable to recover it. 00:25:49.422 [2024-07-24 19:55:06.446676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.422 [2024-07-24 19:55:06.446702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.422 qpair failed and we were unable to recover it. 
00:25:49.422 [2024-07-24 19:55:06.446917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.422 [2024-07-24 19:55:06.446943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.422 qpair failed and we were unable to recover it.
00:25:49.422 [2024-07-24 19:55:06.448186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.422 [2024-07-24 19:55:06.448213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.422 qpair failed and we were unable to recover it.
[... the three-line failure above repeats 210 times between 19:55:06.446917 and 19:55:06.477933 (log offsets 00:25:49.422-00:25:49.429), in runs against tqpair=0x7fce84000b90 and tqpair=0x5b5250; every attempt targets addr=10.0.0.2, port=4420 and fails in posix.c:1023:posix_sock_create with errno = 111, after which nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports the qpair failed and could not be recovered ...]
00:25:49.429 [2024-07-24 19:55:06.478055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.478081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.478219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.478259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.478367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.478399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.478517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.478542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.478649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.478674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.478820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.478845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.479004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.479154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.479339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.479461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 
00:25:49.429 [2024-07-24 19:55:06.479610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.479789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.479946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.479971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.480974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.480999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 
00:25:49.429 [2024-07-24 19:55:06.481105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.481130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.481268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.481294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.481464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.481489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.481598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.481623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.481751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.481776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.481905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.481930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.482032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.482160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.482289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.482447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 
00:25:49.429 [2024-07-24 19:55:06.482586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.482712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.482880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.482905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.483032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.483057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.483187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.483213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.483332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.483358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.429 [2024-07-24 19:55:06.483474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.429 [2024-07-24 19:55:06.483501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.429 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.483607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.483632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.483736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.483762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.483870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.483896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 
00:25:49.430 [2024-07-24 19:55:06.484012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.484141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.484278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.484416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.484588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.484750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.484914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.484939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.485072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.485098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.485252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.485279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.485415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.485440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 
00:25:49.430 [2024-07-24 19:55:06.485571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.485596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.485726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.485751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.485906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.485931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.486037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.486063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.486201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.486226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.486334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.486359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.486488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.486527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.486664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.486691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.486851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.486876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.487008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 
00:25:49.430 [2024-07-24 19:55:06.487171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.487339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.487467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.487600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.487763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.487919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.487944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.488079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.488104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.488247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.488273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.430 qpair failed and we were unable to recover it. 00:25:49.430 [2024-07-24 19:55:06.488432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.430 [2024-07-24 19:55:06.488457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.488554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.488579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 
00:25:49.431 [2024-07-24 19:55:06.488716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.488741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.488849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.488876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.488986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.489145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.489282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.489420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.489601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.489735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.489887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.489912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.490011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.490035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 
00:25:49.431 [2024-07-24 19:55:06.490155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.490195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.490369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.490396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.490501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.490528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.490699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.490730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.490869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.490894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 
00:25:49.431 [2024-07-24 19:55:06.491691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.431 [2024-07-24 19:55:06.491852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.431 qpair failed and we were unable to recover it. 00:25:49.431 [2024-07-24 19:55:06.491955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.491980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.492089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.492114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.492229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.492260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.492370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.492396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.492496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.492522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.492659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.492684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.492842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.492867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.493003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 
00:25:49.432 [2024-07-24 19:55:06.493163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.493308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.493435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.493576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.493716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.493879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.493904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.494061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.494188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.494327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.494483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 
00:25:49.432 [2024-07-24 19:55:06.494642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.494791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.494973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.494998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.495155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.495314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.495455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.495587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.495713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.432 [2024-07-24 19:55:06.495865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.432 qpair failed and we were unable to recover it. 00:25:49.432 [2024-07-24 19:55:06.495965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.495990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 
00:25:49.433 [2024-07-24 19:55:06.496088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.496113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.496223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.496253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.496355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.496380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.496491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.496522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.496669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.496694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.496824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.496848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.496976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.497130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.497260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.497397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 
00:25:49.433 [2024-07-24 19:55:06.497551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.497707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.497891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.497917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.498022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.498047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.498149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.498174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.498311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.498337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.498462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.498487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.498624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.498649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.433 qpair failed and we were unable to recover it. 00:25:49.433 [2024-07-24 19:55:06.498762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.433 [2024-07-24 19:55:06.498787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.434 qpair failed and we were unable to recover it. 00:25:49.434 [2024-07-24 19:55:06.498913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.434 [2024-07-24 19:55:06.498938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.434 qpair failed and we were unable to recover it. 
00:25:49.446 [2024-07-24 19:55:06.529020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.446 [2024-07-24 19:55:06.529045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.446 qpair failed and we were unable to recover it. 00:25:49.446 [2024-07-24 19:55:06.529201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.446 [2024-07-24 19:55:06.529225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.446 qpair failed and we were unable to recover it. 00:25:49.446 [2024-07-24 19:55:06.529334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.446 [2024-07-24 19:55:06.529360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.446 qpair failed and we were unable to recover it. 00:25:49.446 [2024-07-24 19:55:06.529464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.446 [2024-07-24 19:55:06.529490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.446 qpair failed and we were unable to recover it. 00:25:49.446 [2024-07-24 19:55:06.529602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.446 [2024-07-24 19:55:06.529627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.446 qpair failed and we were unable to recover it. 00:25:49.446 [2024-07-24 19:55:06.529788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.446 [2024-07-24 19:55:06.529813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.529946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.529971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.530102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.530128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.530288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.530314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.530474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.530499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 
00:25:49.447 [2024-07-24 19:55:06.530606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.530631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.530739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.530765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.530925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.530951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.531957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.531982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 
00:25:49.447 [2024-07-24 19:55:06.532091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.532116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.532252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.532278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.532409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.532434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.447 qpair failed and we were unable to recover it. 00:25:49.447 [2024-07-24 19:55:06.532535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.447 [2024-07-24 19:55:06.532564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.532667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.532694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.532825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.532850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.532984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.533116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.533247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.533394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 
00:25:49.448 [2024-07-24 19:55:06.533521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.533689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.533841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.533867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.534898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.534924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 
00:25:49.448 [2024-07-24 19:55:06.535035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.535060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.535164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.535190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.535318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.535344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.535445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.535470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.535574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.448 [2024-07-24 19:55:06.535600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.448 qpair failed and we were unable to recover it. 00:25:49.448 [2024-07-24 19:55:06.535706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.535731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.535836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.535861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.535989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.536121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.536247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 
00:25:49.449 [2024-07-24 19:55:06.536406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.536537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.536663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.536818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.536954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.536979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.537091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.537115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.537248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.537275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.449 [2024-07-24 19:55:06.537390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.449 [2024-07-24 19:55:06.537415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.449 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.537523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.537548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.537683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.537707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 
00:25:49.450 [2024-07-24 19:55:06.537817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.537842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.538916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.538940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.539094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.539118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.539251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.539276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 
00:25:49.450 [2024-07-24 19:55:06.539405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.539430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.539563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.539587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.450 [2024-07-24 19:55:06.539714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.450 [2024-07-24 19:55:06.539738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.450 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.539872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.539897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.540028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.540052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.540210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.540235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.540380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.540406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.540546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.540570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.540703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.540730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.540860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.540885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 
00:25:49.451 [2024-07-24 19:55:06.540987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.541135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.541263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.541401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.541585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.541737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.451 [2024-07-24 19:55:06.541888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.451 [2024-07-24 19:55:06.541914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.451 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.542044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.542202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.542363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 
00:25:49.452 [2024-07-24 19:55:06.542503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.542630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.542783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.542912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.542936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.543033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.543057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.452 [2024-07-24 19:55:06.543189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.452 [2024-07-24 19:55:06.543214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.452 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.543335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.543360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.543461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.543485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.543589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.543614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.543712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.543737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 
00:25:49.453 [2024-07-24 19:55:06.543847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.543871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.544859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.544993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.545018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.545151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.545176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 
00:25:49.453 [2024-07-24 19:55:06.545277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.545303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.545418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.545443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.545550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.545575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.545731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.453 [2024-07-24 19:55:06.545755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.453 qpair failed and we were unable to recover it. 00:25:49.453 [2024-07-24 19:55:06.545884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.545908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.546041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.546178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.546388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.546525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.546652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 
00:25:49.454 [2024-07-24 19:55:06.546807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.546939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.546964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.547153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.547284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.547436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.547585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.547714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.547848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.547978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.548003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 00:25:49.454 [2024-07-24 19:55:06.548148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.454 [2024-07-24 19:55:06.548173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.454 qpair failed and we were unable to recover it. 
00:25:49.454 [2024-07-24 19:55:06.548278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.454 [2024-07-24 19:55:06.548303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.454 qpair failed and we were unable to recover it.
[... the same three-line error triplet repeats roughly 200 more times between 19:55:06.548 and 19:55:06.579, identical apart from timestamps, always for tqpair=0x7fce8c000b90, addr=10.0.0.2, port=4420; last occurrence below ...]
00:25:49.461 [2024-07-24 19:55:06.579494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.461 [2024-07-24 19:55:06.579518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.461 qpair failed and we were unable to recover it.
00:25:49.461 [2024-07-24 19:55:06.579623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.579649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.579757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.579781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.579907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.579932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.580089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.580262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.580400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.580579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.580712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.580849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.580980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 
00:25:49.461 [2024-07-24 19:55:06.581113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.581288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.581417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.581571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.581706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.581839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.581863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.581992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.582128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.582262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.582406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 
00:25:49.461 [2024-07-24 19:55:06.582600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.582730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.582859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.582884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.583059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.583212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.583408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.583570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.583728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.583873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.583987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 
00:25:49.461 [2024-07-24 19:55:06.584142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.584273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.584430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.584588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.584712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.584849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.584975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.584999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.461 [2024-07-24 19:55:06.585099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.461 [2024-07-24 19:55:06.585124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.461 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.585233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.585268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.585427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.585452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 
00:25:49.462 [2024-07-24 19:55:06.585554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.585579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.585689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.585713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.585815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.585840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.585955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.585980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.586129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.586155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.586313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.586339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.586450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.586476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.586609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.586636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.586798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.586823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.586930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.586955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 
00:25:49.462 [2024-07-24 19:55:06.587051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.587075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.587223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.587271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.587417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.587446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.587554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.587581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.587693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.587719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.587825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.587851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.588007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.588173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.588312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.588474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 
00:25:49.462 [2024-07-24 19:55:06.588662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.588821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.588956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.588983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.589968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.589993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 
00:25:49.462 [2024-07-24 19:55:06.590128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.590157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.590271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.590299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.590432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.590458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.590594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.590620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.590785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.590811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.590916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.590942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.591050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.591075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.591231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.591263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.591423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.591448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.591552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.591578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 
00:25:49.462 [2024-07-24 19:55:06.591737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.591763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.591894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.462 [2024-07-24 19:55:06.591920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.462 qpair failed and we were unable to recover it. 00:25:49.462 [2024-07-24 19:55:06.592032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.592188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.592355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.592496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.592662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.592791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.592952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.592978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.593083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 
00:25:49.463 [2024-07-24 19:55:06.593215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.593383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.593520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.593675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.593830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.593956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.593983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.594095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.594120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.594231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.594265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.594394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.594420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.594547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.594573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 
00:25:49.463 [2024-07-24 19:55:06.594709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.594735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.594878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.594904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.595039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.595212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.595382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.595508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.595691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.595845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.595975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.596102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 
00:25:49.463 [2024-07-24 19:55:06.596233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.596379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.596534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.596658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.596841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.596866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.596996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.597162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.597289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.597416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.597581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 
00:25:49.463 [2024-07-24 19:55:06.597740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.597862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.597887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.598045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.598237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.598391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.598550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.598721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.598876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.598977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.599003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 00:25:49.463 [2024-07-24 19:55:06.599132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.463 [2024-07-24 19:55:06.599158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.463 qpair failed and we were unable to recover it. 
00:25:49.463 [2024-07-24 19:55:06.599291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.463 [2024-07-24 19:55:06.599318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.463 qpair failed and we were unable to recover it.
00:25:49.463 [... the connect() failed / sock connection error / qpair failed triplet above repeats continuously from 19:55:06.599428 through 19:55:06.629430, first for tqpair=0x7fce7c000b90, then for tqpair=0x7fce8c000b90 (19:55:06.601296 through 19:55:06.602982), then for tqpair=0x7fce7c000b90 again; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:25:49.467 [2024-07-24 19:55:06.629543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.629570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.629728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.629753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.629894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.629918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.630023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.630048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.630181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.630207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.630355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.630380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.630542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.630568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.630614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c3230 (9): Bad file descriptor 00:25:49.467 [2024-07-24 19:55:06.630802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.630840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.630955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.630982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 
00:25:49.467 [2024-07-24 19:55:06.631120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.631260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.631393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.631550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.631675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.631830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.631965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.631990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.632126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.632152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.632293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.632318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.632475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.632500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 
00:25:49.467 [2024-07-24 19:55:06.632632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.632657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.632800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.632827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.632957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.632982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.633117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.633146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.633321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.633348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.633454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.633480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.633617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.633643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.633779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.467 [2024-07-24 19:55:06.633805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.467 qpair failed and we were unable to recover it. 00:25:49.467 [2024-07-24 19:55:06.633946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.633971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.634085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.634111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.634216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.634248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.634402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.634427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.634566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.634591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.634721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.634746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.634891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.634920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.635057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.635084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.635195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.635220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.635363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.635402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.635572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.635598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.635734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.635760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.635860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.635886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.635996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.636137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.636314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.636495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.636631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.636788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.636914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.636939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.637052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.637078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.637217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.637250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.637366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.637393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.637524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.637550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.637681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.637707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.637841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.637866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.638857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.638883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.638985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.639128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.639269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.639402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.639591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.639715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.639845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.639872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.640023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.640161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.640333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.640493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.640651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.640782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.640951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.640978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.641094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.641120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.641251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.641276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.641408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.641433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.641565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.641591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.641720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.641746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.641901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.641927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.642939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.642964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.643123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.643152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 00:25:49.468 [2024-07-24 19:55:06.643282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.468 [2024-07-24 19:55:06.643308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.468 qpair failed and we were unable to recover it. 
00:25:49.468 [2024-07-24 19:55:06.643416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.643441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.643539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.643563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.643663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.643688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.643790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.643816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.643964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.644124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.644298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.644458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.644621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.644756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 
00:25:49.469 [2024-07-24 19:55:06.644933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.644959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.645093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.645118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.645231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.645263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.645370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.645396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.645554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.645578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.645710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.645735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.645863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.645889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.646000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.646132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.646320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 
00:25:49.469 [2024-07-24 19:55:06.646451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.646607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.646741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.646901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.646928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.647089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.647266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.647424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.647558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.647689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.647807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 
00:25:49.469 [2024-07-24 19:55:06.647934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.647959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.648072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.648099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.648234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.648265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.648397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.648422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.648582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.648608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.469 qpair failed and we were unable to recover it. 00:25:49.469 [2024-07-24 19:55:06.648714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.469 [2024-07-24 19:55:06.648739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.648837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.648862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.648965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.648990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.649096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.649121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.649239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.649271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 
00:25:49.470 [2024-07-24 19:55:06.649379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.649404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.649560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.649585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.649711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.649736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.649860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.649885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.649984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.650116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.650287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.650430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.650560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.650703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 
00:25:49.470 [2024-07-24 19:55:06.650890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.650915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.651920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.651945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.652064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.652103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.652254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.652281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 
00:25:49.470 [2024-07-24 19:55:06.652414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.652439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.652586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.652612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.652739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.652764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.652875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.652902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.653020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.653175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.653348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.653507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.653637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.653762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 
00:25:49.470 [2024-07-24 19:55:06.653917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.653943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.654078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.470 [2024-07-24 19:55:06.654104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.470 qpair failed and we were unable to recover it. 00:25:49.470 [2024-07-24 19:55:06.654247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.654273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.654410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.654436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.654565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.654591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.654693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.654718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.654826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.654851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.654979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.655108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.655239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 
00:25:49.471 [2024-07-24 19:55:06.655419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.655550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.655705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.655865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.655890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.655997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.656156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.656281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.656437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.656591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.656720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 
00:25:49.471 [2024-07-24 19:55:06.656844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.656868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.656993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.657129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.657270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.657423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.657606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.657734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.657869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.657894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.658055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.658182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 
00:25:49.471 [2024-07-24 19:55:06.658322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.658473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.658631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.658786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.658939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.658964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.659075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.659100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.659200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.659230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.471 qpair failed and we were unable to recover it. 00:25:49.471 [2024-07-24 19:55:06.659402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.471 [2024-07-24 19:55:06.659427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.659554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.659579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.659687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.659711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 
00:25:49.472 [2024-07-24 19:55:06.659842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.659867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.659965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.659990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.660960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.660985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.661141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.661166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 
00:25:49.472 [2024-07-24 19:55:06.661303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.661329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.661428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.661453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.661552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.661577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.661686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.661711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.661845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.661870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.662012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.662164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.662293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.662451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.662613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 
00:25:49.472 [2024-07-24 19:55:06.662749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.662898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.662922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.663974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.663999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.664124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.664149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 
00:25:49.472 [2024-07-24 19:55:06.664278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.664304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.664441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.664466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.664573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.664598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.664757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.664782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.664908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.664933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.665043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.665179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.665342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.665498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.665653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 
00:25:49.472 [2024-07-24 19:55:06.665809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.665965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.665990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.666131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.666156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.666290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.666315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.666447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.666472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.666634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.666658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.666757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.666782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.666886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.666911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.667066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.667091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.667197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.667223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 
00:25:49.472 [2024-07-24 19:55:06.667375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.667401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.667538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.667564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.472 [2024-07-24 19:55:06.667703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.472 [2024-07-24 19:55:06.667728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.472 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.667861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.667886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.667997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.668147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.668278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.668406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.668532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.668688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 
00:25:49.473 [2024-07-24 19:55:06.668816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.668940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.668965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.669897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.669922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 
00:25:49.473 [2024-07-24 19:55:06.670161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.670969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.670994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.671101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.671127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.671246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.671285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.671404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.671431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 
00:25:49.473 [2024-07-24 19:55:06.671568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.671594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.671728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.671754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.671856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.671881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.672933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.672958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 
00:25:49.473 [2024-07-24 19:55:06.673094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.673118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.673260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.673291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.673424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.673451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.673569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.673595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.673725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.673750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.673873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.673899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.674057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.674194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.674331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.674499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 
00:25:49.473 [2024-07-24 19:55:06.674660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.674817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.674945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.674970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.675097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.473 [2024-07-24 19:55:06.675121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.473 qpair failed and we were unable to recover it. 00:25:49.473 [2024-07-24 19:55:06.675263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.675289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.675423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.675449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.675557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.675582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.675690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.675715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.675816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.675841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.675942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.675967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 
00:25:49.474 [2024-07-24 19:55:06.676067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.676205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.676375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.676513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.676644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.676803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.676960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.676985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.677120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.677147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.677283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.677313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 00:25:49.474 [2024-07-24 19:55:06.677446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.474 [2024-07-24 19:55:06.677471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.474 qpair failed and we were unable to recover it. 
00:25:49.474 - 00:25:49.477 [2024-07-24 19:55:06.677606 - 19:55:06.706100] the same three-line failure sequence repeats without interruption: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", alternating between tqpair=0x5b5250 and tqpair=0x7fce84000b90; from [2024-07-24 19:55:06.705726] a third qpair, tqpair=0x7fce8c000b90, begins failing in the same way.
[2024-07-24 19:55:06.701699 through 19:55:06.727555: the same three-record sequence recurs continuously (connect() failed, errno = 111; the nvme_tcp_qpair_connect_sock error; "qpair failed and we were unable to recover it."), alternating among tqpairs 0x5b5250, 0x7fce84000b90, and 0x7fce8c000b90, always with addr=10.0.0.2, port=4420.]
00:25:49.480 [2024-07-24 19:55:06.727685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.727710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.727845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.727870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.727977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.728103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.728251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.728428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.728610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.728778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.728907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.728933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.729066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.729092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 
00:25:49.480 [2024-07-24 19:55:06.729225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.729257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.729435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.729461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.729566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.729595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.729704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.729731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.729843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.729870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.729982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.480 [2024-07-24 19:55:06.730007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.480 qpair failed and we were unable to recover it. 00:25:49.480 [2024-07-24 19:55:06.730151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.481 [2024-07-24 19:55:06.730190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.481 qpair failed and we were unable to recover it. 00:25:49.481 [2024-07-24 19:55:06.730340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.481 [2024-07-24 19:55:06.730378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.481 qpair failed and we were unable to recover it. 00:25:49.481 [2024-07-24 19:55:06.730511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.481 [2024-07-24 19:55:06.730536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.481 qpair failed and we were unable to recover it. 00:25:49.481 [2024-07-24 19:55:06.730666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.481 [2024-07-24 19:55:06.730691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.481 qpair failed and we were unable to recover it. 
00:25:49.481 [2024-07-24 19:55:06.730823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.730847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.730988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.731942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.731967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.732083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.732121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.732294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.732330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.732467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.732493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.732603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.732629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.732736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.732762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.732906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.732931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.733061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.733223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.733408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.733570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.733728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.733858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.733977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.734002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.734134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.734160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.734307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.734346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.734489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.734516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.734645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.734671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.734810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.734835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.735927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.735952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.736115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.736272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.736396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.736525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.736656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.736840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.736982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.737008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.737151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.737190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.737340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.737377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.737523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.737549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.481 qpair failed and we were unable to recover it.
00:25:49.481 [2024-07-24 19:55:06.737670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.481 [2024-07-24 19:55:06.737696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.737803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.737830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.737975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.738131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.738265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.738448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.738582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.738742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.738877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.738903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.739895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.739921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.740107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.740235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.740396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.740547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.740682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.740843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.740992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.741156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.741288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.741432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.741563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.741684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.741842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.741869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.742879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.742905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.743036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.743066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.743177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.743205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.743365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.743404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.743515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.743542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.482 qpair failed and we were unable to recover it.
00:25:49.482 [2024-07-24 19:55:06.743679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.482 [2024-07-24 19:55:06.743706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.743840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.743866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.743978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.744160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.744314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.744473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.744623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.744783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.744916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.744942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.745940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.745965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.746910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.746935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.483 [2024-07-24 19:55:06.747916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.483 [2024-07-24 19:55:06.747940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.483 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.748907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.768 [2024-07-24 19:55:06.748934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.768 qpair failed and we were unable to recover it.
00:25:49.768 [2024-07-24 19:55:06.749056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.749229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.749372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.749506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.749631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.749764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.749948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.749973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.750961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.750990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.751130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.751155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.751292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.751340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.751448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.751474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.751580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.751605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.751737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.751764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.751898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.751923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.752925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.752953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.753940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.753964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.754060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.754085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.754233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.754279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.754416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.754443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.754580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.754606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.754708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.754733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.754865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.754891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.755008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.755037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.755163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.755191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.769 qpair failed and we were unable to recover it.
00:25:49.769 [2024-07-24 19:55:06.755305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.769 [2024-07-24 19:55:06.755330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.755433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.770 [2024-07-24 19:55:06.755459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.755617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.770 [2024-07-24 19:55:06.755642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.755755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.770 [2024-07-24 19:55:06.755780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.755895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.770 [2024-07-24 19:55:06.755920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.756054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.770 [2024-07-24 19:55:06.756079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.756238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.770 [2024-07-24 19:55:06.756269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.770 qpair failed and we were unable to recover it.
00:25:49.770 [2024-07-24 19:55:06.756401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.756426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.756544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.756568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.756694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.756719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.756848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.756871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.757040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.757180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.757320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.757456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.757575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.757707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 
00:25:49.770 [2024-07-24 19:55:06.757875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.757903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.758934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.758964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.759064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.759089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.759227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.759261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 
00:25:49.770 [2024-07-24 19:55:06.759397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.759422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.759554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.759581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.759738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.759764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.759892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.759916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.760014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.760039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.760170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.760199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.760344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.760370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.760517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.760542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.760699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.760723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.760882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.760905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 
00:25:49.770 [2024-07-24 19:55:06.761046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.761184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.761373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.761503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.761686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.761818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.761948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.761973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.762104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.762128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.770 [2024-07-24 19:55:06.762249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.770 [2024-07-24 19:55:06.762288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.770 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.762414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.762440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 
00:25:49.771 [2024-07-24 19:55:06.762544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.762570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.762704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.762731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.762849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.762888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.763912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.763936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 
00:25:49.771 [2024-07-24 19:55:06.764042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.764215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.764364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.764500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.764659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.764783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.764952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.764991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.765093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.765125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.765269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.765308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.765416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.765442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 
00:25:49.771 [2024-07-24 19:55:06.765557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.765584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.765690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.765716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.765822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.765849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.765977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.766160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.766278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.766412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.766539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.766696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.766816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 
00:25:49.771 [2024-07-24 19:55:06.766945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.766971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.767083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.767110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.767234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.767280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.767428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.767458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.767616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.767643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.767745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.767770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.767883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.767908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.768014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.768174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.768334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 
00:25:49.771 [2024-07-24 19:55:06.768513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.768643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.768825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.768946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.768971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.769076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.771 [2024-07-24 19:55:06.769107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.771 qpair failed and we were unable to recover it. 00:25:49.771 [2024-07-24 19:55:06.769213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.769240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.769383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.769409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.769541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.769566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.769667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.769693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.769827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.769852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 
00:25:49.772 [2024-07-24 19:55:06.769958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.769984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.770109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.770134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.770263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.770290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.770406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.770432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.770529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.770554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.770662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.770687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.770847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.770872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.771001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.771195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.771352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 
00:25:49.772 [2024-07-24 19:55:06.771520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.771674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.771798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.771935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.771961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.772084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.772109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.772257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.772295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.772403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.772430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.772541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.772567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.772701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.772727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.772871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.772909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 
00:25:49.772 [2024-07-24 19:55:06.773019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.773182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.773334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.773463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.773601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.773735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.773861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.773886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.774015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.774039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.774148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.774173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.774331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.774355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 
00:25:49.772 [2024-07-24 19:55:06.774452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.774477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.774610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.774636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.772 [2024-07-24 19:55:06.774744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.772 [2024-07-24 19:55:06.774769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.772 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.774880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.774904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.775007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.775193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.775356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.775506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.775647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.775778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 
00:25:49.773 [2024-07-24 19:55:06.775926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.775955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.776124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.776150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.776283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.776309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.776417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.776443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.776555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.776594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.776714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.776741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.776850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.776876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.777006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.777032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.777162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.777200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.777361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.777388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 
00:25:49.773 [2024-07-24 19:55:06.777517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.777545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.777715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.777762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.777904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.777950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.778081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.778109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.778249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.778277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.778437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.778463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.778662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.778715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.778968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.779020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.779131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.779159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 00:25:49.773 [2024-07-24 19:55:06.779323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.779349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it. 
00:25:49.773 [2024-07-24 19:55:06.779450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.773 [2024-07-24 19:55:06.779477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.773 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats 29 more times for tqpair=0x5b5250 with addr=10.0.0.2, port=4420, between 19:55:06.779 and 19:55:06.784 ...]
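Editor's note: on Linux, errno 111 is ECONNREFUSED, i.e. the TCP SYN to 10.0.0.2:4420 was answered with RST because nothing was accepting connections on that port (had the host itself been unreachable, the errors would instead be ETIMEDOUT or EHOSTUNREACH). The minimal standalone sketch below -- illustrative only, not SPDK code -- reproduces the same failure mode, assuming a reachable host with no listener on the NVMe/TCP port:

/*
 * Illustrative sketch (not SPDK code): reproduce "connect() failed,
 * errno = 111" by connecting to a TCP port with no listener. On Linux,
 * ECONNREFUSED is defined as 111, matching the log lines above.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);            /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}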
00:25:49.774 [2024-07-24 19:55:06.784311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.774 [2024-07-24 19:55:06.784349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:49.774 qpair failed and we were unable to recover it.
00:25:49.774 [2024-07-24 19:55:06.788344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.774 [2024-07-24 19:55:06.788382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.774 qpair failed and we were unable to recover it.
[... the same triplet repeats 178 more times between 19:55:06.784 and 19:55:06.815, interleaving tqpair=0x5b5250, tqpair=0x7fce84000b90, and tqpair=0x7fce8c000b90, all with addr=10.0.0.2, port=4420 ...]
00:25:49.778 [2024-07-24 19:55:06.815193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.815217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.815385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.815411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.815546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.815572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.815699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.815724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.815841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.815869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.816022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.816047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.816176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.816218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.816355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.816381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.816517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.816679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.816722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 
00:25:49.778 [2024-07-24 19:55:06.816859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.816888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.817908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.817933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.818078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.818103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.818234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.818272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 
00:25:49.778 [2024-07-24 19:55:06.818405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.818435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.818543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.818570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.818699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.818728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.818860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.818886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.818995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.819021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.819155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.819181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.819317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.819344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.819498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.819526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.819647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.819675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.819856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.819881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 
00:25:49.778 [2024-07-24 19:55:06.819986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.820011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.820113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.778 [2024-07-24 19:55:06.820138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.778 qpair failed and we were unable to recover it. 00:25:49.778 [2024-07-24 19:55:06.820267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.820293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.820393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.820418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.820603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.820645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.820779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.820804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.820934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.820975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.821146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.821174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.821297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.821322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.821420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.821445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 
00:25:49.779 [2024-07-24 19:55:06.821629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.821656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.821819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.821844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.821948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.821973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.822068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.822092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.822226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.822258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.822371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.822397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.822526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.822551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.822689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.822716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.822873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.822898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.823056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.823081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 
00:25:49.779 [2024-07-24 19:55:06.823215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.823246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.823379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.823404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.823505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.823531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.823690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.823715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.823818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.823843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.823997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.824025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.824176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.824202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.824339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.824364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.824492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.824533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.824674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.824699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 
00:25:49.779 [2024-07-24 19:55:06.824832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.824861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.825003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.825031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.825186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.825211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.825347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.825372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.825484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.825510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.825639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.825664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.825795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.825838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.826025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.826051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.826181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.826207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.826349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.826375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 
00:25:49.779 [2024-07-24 19:55:06.826507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.826534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.779 qpair failed and we were unable to recover it. 00:25:49.779 [2024-07-24 19:55:06.826677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.779 [2024-07-24 19:55:06.826703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.826863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.826906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.827023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.827065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.827224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.827267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.827426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.827451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.827613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.827638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.827742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.827767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.827893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.827917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.828055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.828080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 
00:25:49.780 [2024-07-24 19:55:06.828207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.828232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.828373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.828398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.828526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.828554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.828674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.828699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.828806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.828831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.829019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.829044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.829188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.829213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.829353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.829379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.829532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.829560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.829707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.829732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 
00:25:49.780 [2024-07-24 19:55:06.829890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.829915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.830933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.830957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.831113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.831139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.831321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.831350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 
00:25:49.780 [2024-07-24 19:55:06.831489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.831522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.831679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.831704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.831834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.831876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.832049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.832077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.832228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.832259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.832390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.832433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.832548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.832577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.832733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.832760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.780 [2024-07-24 19:55:06.832891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.780 [2024-07-24 19:55:06.832932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.780 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.833051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.833079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 
00:25:49.781 [2024-07-24 19:55:06.833226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.833261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.833412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.833436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.833541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.833567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.833716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.833741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.833877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.833919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.834060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.834088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.834212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.834263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.834386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.834411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.834540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.834565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.834671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.834698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 
00:25:49.781 [2024-07-24 19:55:06.834829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.834855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.835022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.835047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.835207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.835232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.835422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.835451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.835589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.835617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.835741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.835767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.835923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.835948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.836125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.836153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.836314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.836340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.836446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.836471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 
00:25:49.781 [2024-07-24 19:55:06.836615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.836640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.836808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.836834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.836991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.837035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.837184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.837212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.837349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.837375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.837481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.837506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.837672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.837698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.837830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.837856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.838032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.838060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 00:25:49.781 [2024-07-24 19:55:06.838182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.781 [2024-07-24 19:55:06.838208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.781 qpair failed and we were unable to recover it. 
00:25:49.786 [2024-07-24 19:55:06.871201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.871235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.871406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.871431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.871592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.871616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.871749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.871774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.871888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.871914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.872008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.872032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.872163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.872189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.872341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.872367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.872473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.872516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.872698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.872723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 
00:25:49.786 [2024-07-24 19:55:06.872857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.872882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.872983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.873143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.873274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.873462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.873598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.873727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.873853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.873877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.874049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.874076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.874214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.874239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 
00:25:49.786 [2024-07-24 19:55:06.874380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.874404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.874516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.874540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.874680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.874704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.874804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.874829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.875014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.875042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.875223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.875256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.875406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.875433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.875548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.875576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.875697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.875720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.875878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.875918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 
00:25:49.786 [2024-07-24 19:55:06.876053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.876080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.876227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.876260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.876392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.876416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.876572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.876600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.876778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.876803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.876910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.876951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.877069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.877095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.877316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.877341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.877447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.786 [2024-07-24 19:55:06.877473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.786 qpair failed and we were unable to recover it. 00:25:49.786 [2024-07-24 19:55:06.877646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.877673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 
00:25:49.787 [2024-07-24 19:55:06.877860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.877889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.878965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.878989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.879094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.879237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 
00:25:49.787 [2024-07-24 19:55:06.879420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.879546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.879674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.879813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.879975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.879999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.880108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.880133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.880265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.880291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.880448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.880477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.880605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.880630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.880726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.880750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 
00:25:49.787 [2024-07-24 19:55:06.880874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.880902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.881022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.881046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.881181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.881205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.881344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.881385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.881540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.881564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.881731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.881758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.881932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.881957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.882092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.882213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.882366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 
00:25:49.787 [2024-07-24 19:55:06.882519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.882647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.882794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.882953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.882977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.883107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.883132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.883286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.883314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.883496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.787 [2024-07-24 19:55:06.883520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.787 qpair failed and we were unable to recover it. 00:25:49.787 [2024-07-24 19:55:06.883650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.883674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.883820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.883847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.884002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.884028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 
00:25:49.788 [2024-07-24 19:55:06.884159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.884207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.884379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.884404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.884532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.884557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.884714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.884738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.884899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.884925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.885062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.885086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.885250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.885276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.885371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.885396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.885533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.885558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.885666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.885706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 
00:25:49.788 [2024-07-24 19:55:06.885860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.885887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.886043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.886068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.886252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.886279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.886421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.886449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.886631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.886656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.886761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.886786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.886911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.886938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.887062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.887087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.887222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.887266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.887403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.887433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 
00:25:49.788 [2024-07-24 19:55:06.887622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.887647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.887796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.887823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.888005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.888032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.888220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.888253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.888405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.888433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.888576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.888604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.888759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.888783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.888922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.888947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.889051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.889075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.889202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.889227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 
00:25:49.788 [2024-07-24 19:55:06.889343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.889368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.889523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.889547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.889651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.889676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.889805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.889829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.889990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.890144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.890269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.890482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.890647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.890781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 
00:25:49.788 [2024-07-24 19:55:06.890936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.890961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.891064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.891088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.891193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.891218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.891384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.891412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.788 qpair failed and we were unable to recover it. 00:25:49.788 [2024-07-24 19:55:06.891574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.788 [2024-07-24 19:55:06.891599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.789 qpair failed and we were unable to recover it. 00:25:49.789 [2024-07-24 19:55:06.891774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.789 [2024-07-24 19:55:06.891800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.789 qpair failed and we were unable to recover it. 00:25:49.789 [2024-07-24 19:55:06.891939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.789 [2024-07-24 19:55:06.891967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.789 qpair failed and we were unable to recover it. 00:25:49.789 [2024-07-24 19:55:06.892093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.789 [2024-07-24 19:55:06.892119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.789 qpair failed and we were unable to recover it. 00:25:49.789 [2024-07-24 19:55:06.892251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.789 [2024-07-24 19:55:06.892275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.789 qpair failed and we were unable to recover it. 00:25:49.789 [2024-07-24 19:55:06.892400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.789 [2024-07-24 19:55:06.892428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.789 qpair failed and we were unable to recover it. 
00:25:49.789 [2024-07-24 19:55:06.892578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.789 [2024-07-24 19:55:06.892604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.789 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every subsequent connect attempt in this span, timestamps 19:55:06.892713 through 19:55:06.927594: posix_sock_create reports connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:49.792 [2024-07-24 19:55:06.927716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.927742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.927895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.927920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.928054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.928081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.928212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.928237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.928376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.928402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.928526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.928555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.928705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.928729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.928867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.928897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.929034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.929059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.929186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.929210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 
00:25:49.792 [2024-07-24 19:55:06.929352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.929378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.929536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.929564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.929717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.929741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.929849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.929891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.930002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.930029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.930208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.930233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.930391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.930419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.930553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.930580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.930739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.792 [2024-07-24 19:55:06.930763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.792 qpair failed and we were unable to recover it. 00:25:49.792 [2024-07-24 19:55:06.930890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.930931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 
00:25:49.793 [2024-07-24 19:55:06.931072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.931099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.931281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.931307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.931434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.931476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.931590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.931617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.931760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.931784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.931932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.931957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.932083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.932108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.932309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.932335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.932470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.932494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.932596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.932621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 
00:25:49.793 [2024-07-24 19:55:06.932721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.932745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.932877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.932902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.933050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.933077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.933238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.933277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.933434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.933461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.933602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.933630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.933761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.933785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.933885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.933910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.934030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.934058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.934211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.934236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 
00:25:49.793 [2024-07-24 19:55:06.934378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.934402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.934531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.934556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.934680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.934704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.934858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.934883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.935011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.935037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.935206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.935231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.935365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.935408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.935550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.935583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.935734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.935760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.935857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.935882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 
00:25:49.793 [2024-07-24 19:55:06.936034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.936063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.936183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.936208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.936322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.936347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.936504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.936531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.936685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.936709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.936843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.936868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.937001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.937025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.937156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.937180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.937319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.937345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.937497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.937524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 
00:25:49.793 [2024-07-24 19:55:06.937675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.937700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.937885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.937913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.938034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.938061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.938234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.938267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.938410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.938438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.938558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.938585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.938713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.938738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.938868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.938893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.939034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.939061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.939220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.939251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 
00:25:49.793 [2024-07-24 19:55:06.939388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.939414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.939529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.939556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.939737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.939762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.939874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.939901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.940036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.940061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.793 qpair failed and we were unable to recover it. 00:25:49.793 [2024-07-24 19:55:06.940192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.793 [2024-07-24 19:55:06.940217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.940359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.940384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.940548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.940572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.940670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.940695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.940823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.940848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 
00:25:49.794 [2024-07-24 19:55:06.940998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.941026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.941172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.941197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.941333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.941375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.941513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.941541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.941695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.941720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.941874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.941898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.942003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.942164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.942353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.942492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 
00:25:49.794 [2024-07-24 19:55:06.942674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.942833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.942959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.942983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.943093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.943118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.943251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.943293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.943405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.943432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.943588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.943613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.943740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.943764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.943907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.943935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.944087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.944111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 
00:25:49.794 [2024-07-24 19:55:06.944211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.944235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.944380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.944408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.944554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.944578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.944682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.944707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.944827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.944855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.944988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.945012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.945140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.945165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.945298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.945326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.945460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.945485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.945619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.945643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 
00:25:49.794 [2024-07-24 19:55:06.945796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.945823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.945981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.946005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.946116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.946140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.946298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.946327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.946483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.946508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.946645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.946669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.946805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.946829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.946985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.947010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.947159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.947186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.947329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.947358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 
00:25:49.794 [2024-07-24 19:55:06.947513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.947537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.947663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.947688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.947858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.947887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.948017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.948043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.948180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.948204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.948366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.794 [2024-07-24 19:55:06.948391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.794 qpair failed and we were unable to recover it. 00:25:49.794 [2024-07-24 19:55:06.948528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.795 [2024-07-24 19:55:06.948554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.795 qpair failed and we were unable to recover it. 00:25:49.795 [2024-07-24 19:55:06.948724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.795 [2024-07-24 19:55:06.948755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.795 qpair failed and we were unable to recover it. 00:25:49.795 [2024-07-24 19:55:06.948928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.795 [2024-07-24 19:55:06.948955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.795 qpair failed and we were unable to recover it. 00:25:49.795 [2024-07-24 19:55:06.949105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.795 [2024-07-24 19:55:06.949131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.795 qpair failed and we were unable to recover it. 
00:25:49.795 [2024-07-24 19:55:06.949263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.795 [2024-07-24 19:55:06.949307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.795 qpair failed and we were unable to recover it.
00:25:49.795 [... the same three-line error group (posix_sock_create errno = 111, nvme_tcp_qpair_connect_sock failure for tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for every subsequent connect attempt, with timestamps running from 19:55:06.949446 through 19:55:06.983753; the intervening verbatim repetitions are omitted here ...]
00:25:49.799 [2024-07-24 19:55:06.983884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.983925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.984049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.984078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.984197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.984222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.984330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.984356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.984528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.984555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.984705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.984729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.984904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.984932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.985072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.985100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.985255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.985282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.985392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.985418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 
00:25:49.799 [2024-07-24 19:55:06.985573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.985598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.985725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.985750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.985853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.985878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.986048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.986072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.986205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.986234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.986400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.986428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.986602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.986630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.986748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.986773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.986901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.986927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 00:25:49.799 [2024-07-24 19:55:06.987101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.799 [2024-07-24 19:55:06.987129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.799 qpair failed and we were unable to recover it. 
00:25:49.799 [2024-07-24 19:55:06.987261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.987287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.987392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.987418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.987578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.987605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.987750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.987775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.987891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.987916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.988046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.988071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.988172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.988199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.988310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.988337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.988501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.988526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.988659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.988684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 
00:25:49.800 [2024-07-24 19:55:06.988790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.988817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.988973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.989001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.989155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.989180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.989314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.989357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.989500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.989528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.989661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.989686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.989831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.989874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.989987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.990032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.990188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.990213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.990354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.990379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 
00:25:49.800 [2024-07-24 19:55:06.990480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.990504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.990679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.990704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.990833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.990873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.990984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.991013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.991149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.991175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.991330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.991356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.991509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.991538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.991658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.991683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.991807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.991832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.991984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.992012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 
00:25:49.800 [2024-07-24 19:55:06.992149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.992174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.992307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.992334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.992499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.992527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.992681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.992706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.992877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.992910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.993025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.993054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.993235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.993266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.993365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.993407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.993568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.993592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.993748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.993773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 
00:25:49.800 [2024-07-24 19:55:06.993876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.993917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.994070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.994096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.994207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.994232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.994372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.994413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.994553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.994581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.994703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.994728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.800 qpair failed and we were unable to recover it. 00:25:49.800 [2024-07-24 19:55:06.994858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.800 [2024-07-24 19:55:06.994884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.995043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.995071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.995228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.995260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.995364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.995390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 
00:25:49.801 [2024-07-24 19:55:06.995498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.995523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.995652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.995677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.995805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.995831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.996044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.996175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.996302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.996470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.996638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.996839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.996991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.997019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 
00:25:49.801 [2024-07-24 19:55:06.997176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.997200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.997338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.997382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.997499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.997527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.997653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.997678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.997836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.997861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.997984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.998142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.998265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.998440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.998648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 
00:25:49.801 [2024-07-24 19:55:06.998778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.998928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.998953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.999063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.999088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.999190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.999216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.999387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.999434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.999594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.999620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.999724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.999751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:06.999910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:06.999939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.000112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.000140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.000324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.000350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 
00:25:49.801 [2024-07-24 19:55:07.000480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.000505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.000671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.000696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.000823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.000865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.001050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.001075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.001235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.001268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.001381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.001405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.001564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.001589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.001689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.001714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.001881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.001906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.002087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.002114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 
00:25:49.801 [2024-07-24 19:55:07.002290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.002316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.002422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.002466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.002681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.002706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.002936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.002963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.003119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.003147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.003288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.003316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.003443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.801 [2024-07-24 19:55:07.003468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.801 qpair failed and we were unable to recover it. 00:25:49.801 [2024-07-24 19:55:07.003577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.003603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.003707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.003733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.003869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.003895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 
00:25:49.802 [2024-07-24 19:55:07.004007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.004049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.004253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.004280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.004383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.004408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.004566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.004591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.004705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.004734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.004872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.004897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.005032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.005057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.005196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.005222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.005386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.005412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 00:25:49.802 [2024-07-24 19:55:07.005516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.802 [2024-07-24 19:55:07.005541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.802 qpair failed and we were unable to recover it. 
00:25:49.802 [2024-07-24 19:55:07.005697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.802 [2024-07-24 19:55:07.005722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.802 qpair failed and we were unable to recover it.
[this connect() failed / sock connection error / qpair failed triplet for tqpair=0x7fce8c000b90 repeats with only the timestamps advancing, through 19:55:07.037273]
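For reference: errno 111 is ECONNREFUSED on Linux, i.e. the target at 10.0.0.2:4420 (the conventional NVMe-oF TCP port) actively refused each connection, typically because nothing was listening there when the initiator dialed in. The sketch below is plain POSIX sockets, not SPDK's posix.c, and simply reproduces the failure signature logged above under the assumption that no listener is up on that address and port.

```c
/* Minimal sketch (not SPDK code): connect() to a TCP port with no
 * listener returns -1 with errno = 111 (ECONNREFUSED) on Linux.
 * Address and port mirror the log above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe-oF TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```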
00:25:49.806 [2024-07-24 19:55:07.037423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.037448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.037585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.037639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.037797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.037827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.037981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.038134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.038295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.038478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.038622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.038777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.038930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.038956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 
00:25:49.806 [2024-07-24 19:55:07.039085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.039126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.039290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.039319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.039454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.039480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.039645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.039670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.039795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.039824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.039981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.040007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.040140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.040181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.040354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.806 [2024-07-24 19:55:07.040383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.806 qpair failed and we were unable to recover it. 00:25:49.806 [2024-07-24 19:55:07.040540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.040566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.040673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.040698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 
00:25:49.807 [2024-07-24 19:55:07.040854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.040882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.041027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.041052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.041157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.041182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.041327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.041358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.041538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.041564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.041671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.041714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.041852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.041880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.042029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.042054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.042181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.042210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.042360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.042386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 
00:25:49.807 [2024-07-24 19:55:07.042521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.042546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.042644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.042669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.042856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.042885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.043063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.043088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.043270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.043317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.043449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.043474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.043572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.043598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.043703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.043729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.043874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.043902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 00:25:49.807 [2024-07-24 19:55:07.044017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.807 [2024-07-24 19:55:07.044042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.807 qpair failed and we were unable to recover it. 
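Note on the repeated failure records above: errno 111 is ECONNREFUSED on Linux, which is what connect() returns immediately when the peer host is reachable but nothing is listening on TCP port 4420 (the target application is down at this point in the test). A minimal standalone sketch, not SPDK code, with the address and port taken from the log, reproduces the condition:

```c
/* Standalone sketch (assumption: not SPDK's posix_sock_create, just the
 * same syscall path): connect() to a port with no listener fails with
 * ECONNREFUSED, which is errno 111 on Linux. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe-oF TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the target killed, the kernel on the remote side answers
         * the SYN with RST, so connect() fails right away. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```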
00:25:49.807 [2024-07-24 19:55:07.044170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.044196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.044357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.044384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.044521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.044547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.044676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.044719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.044871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.044900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.045017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.045043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1285152 Killed "${NVMF_APP[@]}" "$@"
00:25:49.807 [2024-07-24 19:55:07.045173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.045199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.045382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.045409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.045507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:49.807 [2024-07-24 19:55:07.045533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.045669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.807 [2024-07-24 19:55:07.045694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.807 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:49.807 qpair failed and we were unable to recover it.
00:25:49.807 [2024-07-24 19:55:07.045825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.045851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.045985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.046011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@725 -- # xtrace_disable
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:49.808 [2024-07-24 19:55:07.046148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.046174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.046338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.046364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.046497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.046523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.046680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.046705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.046807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.046834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
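The nvmfappstart -m 0xF0 trace above restarts the target with an SPDK core mask; each set bit in the mask selects one CPU core, so 0xF0 pins the app to cores 4-7. A tiny standalone sketch, illustrative only and not harness code, decodes such a mask:

```c
/* Decode an SPDK-style "-m" core mask: each set bit selects one core.
 * Standalone sketch; 0xF0 -> cores 4 5 6 7. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long mask = strtoul("0xF0", NULL, 16);
    printf("mask 0x%lX selects cores:", mask);
    for (unsigned core = 0; core < 8 * sizeof(mask); core++) {
        if ((mask >> core) & 1UL) {
            printf(" %u", core);
        }
    }
    printf("\n");   /* prints: mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}
```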
00:25:49.808 [2024-07-24 19:55:07.046967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.046992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.047160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.047203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.047390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.047418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.047559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.047585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.047693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.047719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.047856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.047886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.048018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.048043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.048201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.048227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.048348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.048373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 00:25:49.808 [2024-07-24 19:55:07.048540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.808 [2024-07-24 19:55:07.048570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.808 qpair failed and we were unable to recover it. 
00:25:49.808 [2024-07-24 19:55:07.048747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.048801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.048976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.049157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.049292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.049428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.049592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.049715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 [2024-07-24 19:55:07.049843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.049867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@485 -- # nvmfpid=1285643
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:49.808 [2024-07-24 19:55:07.049997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.050024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@486 -- # waitforlisten 1285643
00:25:49.808 [2024-07-24 19:55:07.050201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 [2024-07-24 19:55:07.050229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # '[' -z 1285643 ']'
00:25:49.808 [2024-07-24 19:55:07.050380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.808 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:49.808 [2024-07-24 19:55:07.050407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.808 qpair failed and we were unable to recover it.
00:25:49.809 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local max_retries=100
00:25:49.809 [2024-07-24 19:55:07.050574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.809 [2024-07-24 19:55:07.050601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.809 qpair failed and we were unable to recover it.
00:25:49.809 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:49.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:49.809 [2024-07-24 19:55:07.050782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.809 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@841 -- # xtrace_disable
00:25:49.809 [2024-07-24 19:55:07.050811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.809 qpair failed and we were unable to recover it.
00:25:49.809 [2024-07-24 19:55:07.050966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.809 [2024-07-24 19:55:07.050994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.809 qpair failed and we were unable to recover it.
00:25:49.809 [2024-07-24 19:55:07.051118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.809 [2024-07-24 19:55:07.051145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.809 qpair failed and we were unable to recover it.
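The waitforlisten 1285643 step above polls, up to max_retries=100 times, until the freshly restarted nvmf_tgt accepts connections on the RPC socket /var/tmp/spdk.sock. A rough standalone analogue in C; the function name and the 100 ms retry delay are assumptions for illustration, not the harness's exact logic:

```c
/* Rough standalone analogue of the waitforlisten step in the log above:
 * poll the RPC UNIX socket until the daemon accepts a connection, giving
 * up after max_retries attempts. The 100 ms delay is an assumption. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);              /* daemon is up and listening */
            return 0;
        }
        close(fd);
        usleep(100 * 1000);         /* retry until the daemon binds the socket */
    }
    return -1;                      /* never came up within max_retries */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("process is listening on /var/tmp/spdk.sock\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}
```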
00:25:49.809 [2024-07-24 19:55:07.051305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.051331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.051444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.051470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.051604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.051629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.051758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.051802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.051953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.051981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.052137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.052162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.052267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.052297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.052401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.052425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.052537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.052561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.052705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.052746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 
00:25:49.809 [2024-07-24 19:55:07.052864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.052891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.053021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.053046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.053182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.053208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.053385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.053411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.053522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.053546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.053680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.053705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.053826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.053853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.809 qpair failed and we were unable to recover it. 00:25:49.809 [2024-07-24 19:55:07.054029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.809 [2024-07-24 19:55:07.054054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.054151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.054176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.054337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.054362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 
00:25:49.810 [2024-07-24 19:55:07.054522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.054546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.054686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.054714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.054833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.054861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.055017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.055042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.055200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.055250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.055380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.055405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.055560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.055585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.055731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.055760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.055878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.055905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.056096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.056121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 
00:25:49.810 [2024-07-24 19:55:07.056271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.056300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.056455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.056481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.056593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.056618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.056756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.056782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.056902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.056931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.057111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.057136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.057282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.057309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.057426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.057453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.057587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.057612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.057755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.057780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 
00:25:49.810 [2024-07-24 19:55:07.057938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.057967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.058133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.058159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.058316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.058342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.058464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.058490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.058613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.058638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.058771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.058814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.058969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.059002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.059163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.059189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.059347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.059386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.059529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.059556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 
00:25:49.810 [2024-07-24 19:55:07.059688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.810 [2024-07-24 19:55:07.059714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.810 qpair failed and we were unable to recover it. 00:25:49.810 [2024-07-24 19:55:07.059809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.059834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.059954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.059982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.060131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.060156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.060314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.060340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.060466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.060491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.060616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.060641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.060775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.060817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.060963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.060991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.061120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.061145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 
00:25:49.811 [2024-07-24 19:55:07.061251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.061277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.061383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.061409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.061571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.061596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.061719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.061748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.061892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.061919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.062067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.062092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.062202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.062230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.062401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.062440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.062584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.062612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 00:25:49.811 [2024-07-24 19:55:07.062790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.811 [2024-07-24 19:55:07.062819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:49.811 qpair failed and we were unable to recover it. 
[The same three-line failure repeats continuously from 19:55:07.062967 through 19:55:07.095798: connect() failed with errno = 111 (ECONNREFUSED) in posix.c:1023:posix_sock_create, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x5b5250, 0x7fce7c000b90, or 0x7fce8c000b90 (addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it." No connection attempt in this span succeeds.]
00:25:49.817 [2024-07-24 19:55:07.095958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.817 [2024-07-24 19:55:07.095983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.817 qpair failed and we were unable to recover it. 00:25:49.817 [2024-07-24 19:55:07.096128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.817 [2024-07-24 19:55:07.096153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.817 qpair failed and we were unable to recover it. 00:25:49.817 [2024-07-24 19:55:07.096290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.817 [2024-07-24 19:55:07.096316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.817 qpair failed and we were unable to recover it. 00:25:49.817 [2024-07-24 19:55:07.096444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.817 [2024-07-24 19:55:07.096469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.817 qpair failed and we were unable to recover it. 00:25:49.817 [2024-07-24 19:55:07.096630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.817 [2024-07-24 19:55:07.096656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.817 qpair failed and we were unable to recover it. 00:25:49.817 [2024-07-24 19:55:07.096784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.817 [2024-07-24 19:55:07.096809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.096945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.096971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.097113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.097142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.097282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.097308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.097467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.097492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 
00:25:49.818 [2024-07-24 19:55:07.097634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.097659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.097805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.097830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.097942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.097967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.098071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.098096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.098228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.098265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.098332] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
00:25:49.818 [2024-07-24 19:55:07.098384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.098409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:49.818 [2024-07-24 19:55:07.098410] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.098587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.098625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.098773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.098800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.098904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.818 [2024-07-24 19:55:07.098928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.818 qpair failed and we were unable to recover it.
00:25:49.818 [2024-07-24 19:55:07.099068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.099097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.099199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.099224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.099371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.099396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.099515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.099552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.099699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.099724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.099858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.099884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.099993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.100130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.100266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.100405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 
00:25:49.818 [2024-07-24 19:55:07.100566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.100727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.100901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.100940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.101048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.101078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.101190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.101216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.101337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.101365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.101476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.101503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.101627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.818 [2024-07-24 19:55:07.101655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.818 qpair failed and we were unable to recover it. 00:25:49.818 [2024-07-24 19:55:07.101788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.101814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.101956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.101983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 
00:25:49.819 [2024-07-24 19:55:07.102110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.102135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.102267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.102294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.102423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.102448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.102579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.102616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.102753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.102778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.102901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.102926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.103046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.103072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.103196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.103251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.103374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.103400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.103509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.103536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 
00:25:49.819 [2024-07-24 19:55:07.103685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.103712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.103842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.103868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.103991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.104161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.104334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.104516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.104678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.104811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.104970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.104997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.105126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.105152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 
00:25:49.819 [2024-07-24 19:55:07.105261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.105294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.105450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.105475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.105608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.105633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.105780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.105812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.105938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.105962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.106085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.106110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.106248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.106274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.106407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.106431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.106538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.106571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.106678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.106702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 
00:25:49.819 [2024-07-24 19:55:07.106835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.819 [2024-07-24 19:55:07.106860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.819 qpair failed and we were unable to recover it. 00:25:49.819 [2024-07-24 19:55:07.106960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.106984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.107142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.107277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.107441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.107595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.107724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.107881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.107991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.108134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 
00:25:49.820 [2024-07-24 19:55:07.108289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.108441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.108573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.108733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.108915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.108941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.109075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.109101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.109266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.109292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.109433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.109458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.109586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.109610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.109756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.109780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 
00:25:49.820 [2024-07-24 19:55:07.109890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.109915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.110945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.110970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.820 qpair failed and we were unable to recover it. 00:25:49.820 [2024-07-24 19:55:07.111103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.820 [2024-07-24 19:55:07.111128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.111279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.111304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 
00:25:49.821 [2024-07-24 19:55:07.111433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.111462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.111578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.111603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.111731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.111756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.111912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.111936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.112108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.112132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.112263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.112288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.112423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.112449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.112556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.112581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.112716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.112742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.112872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.112898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 
00:25:49.821 [2024-07-24 19:55:07.113028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.113052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.113168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.113193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.113372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.113397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.113504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.113528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.113686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.113711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.113850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.113874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.113983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.114109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.114253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.114407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 
00:25:49.821 [2024-07-24 19:55:07.114575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.114739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.114866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.114890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.115039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.115064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.115222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.115253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.115392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.115416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.115522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.115551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.115701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.115737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.115871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.115896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 00:25:49.821 [2024-07-24 19:55:07.116052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.821 [2024-07-24 19:55:07.116077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:49.821 qpair failed and we were unable to recover it. 
00:25:49.821 [2024-07-24 19:55:07.116217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.821 [2024-07-24 19:55:07.116249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:49.821 qpair failed and we were unable to recover it.
[... the same three-line failure pattern (connect() errno = 111, sock connection error, "qpair failed and we were unable to recover it.") repeats continuously from 19:55:07.116 to 19:55:07.135, alternating between tqpair=0x7fce8c000b90 and tqpair=0x7fce84000b90; only the per-attempt timestamps differ ...]
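errno = 111 on Linux is ECONNREFUSED: the TCP connection attempt reached the target host, but nothing was accepting on the port, so the kernel answered with a reset. The standalone C sketch below reproduces that condition; it is not SPDK source, and only the address and port are taken from the log entries above.

/* Minimal sketch (not SPDK code): reproduce the "connect() failed,
 * errno = 111" seen in the log. With no NVMe-oF target listening on
 * 10.0.0.2:4420, connect() fails with ECONNREFUSED (111 on Linux). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe-oF TCP port from the log */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener bound to the port, this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against a host with no listener on port 4420, it prints the same errno = 111 that fills the entries above.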
00:25:50.109 [2024-07-24 19:55:07.135400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.109 [2024-07-24 19:55:07.135424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.109 qpair failed and we were unable to recover it.
00:25:50.109 EAL: No free 2048 kB hugepages reported on node 1
[... connect() retries to 10.0.0.2, port=4420 continue to fail with errno = 111 for tqpair=0x7fce8c000b90 through 19:55:07.137 ...]
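The interleaved EAL message means DPDK's environment abstraction layer found no free 2048 kB hugepages on NUMA node 1. The counters it reports come from the standard Linux /proc/meminfo interface; the small diagnostic sketch below reads them directly. The /proc/meminfo field names are standard kernel interfaces; the helper function itself is made up for illustration.

/* Hypothetical diagnostic (not part of SPDK or DPDK): print the hugepage
 * counters behind "No free 2048 kB hugepages reported". */
#include <stdio.h>
#include <string.h>

static long meminfo_value(const char *key)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    if (!fp)
        return -1;

    char line[256];
    long value = -1;
    while (fgets(line, sizeof(line), fp)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            /* %ld skips the whitespace after the colon */
            sscanf(line + strlen(key), ":%ld", &value);
            break;
        }
    }
    fclose(fp);
    return value;
}

int main(void)
{
    printf("HugePages_Total: %ld\n", meminfo_value("HugePages_Total"));
    printf("HugePages_Free:  %ld\n", meminfo_value("HugePages_Free"));
    printf("Hugepagesize:    %ld kB\n", meminfo_value("Hugepagesize"));
    return 0;
}

A HugePages_Free of 0 (or no hugepages reserved at all) is the condition EAL is flagging for that node.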
00:25:50.110 [2024-07-24 19:55:07.137552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.110 [2024-07-24 19:55:07.137580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.110 qpair failed and we were unable to recover it.
[... the pattern continues unchanged from 19:55:07.137 through 19:55:07.150, with the tqpair handle cycling through 0x7fce8c000b90, 0x7fce84000b90, 0x7fce7c000b90, and 0x5b5250; every attempt to 10.0.0.2, port=4420 fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:25:50.112 [2024-07-24 19:55:07.150240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.150272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.150397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.150423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.150530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.150555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.150721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.150746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.150875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.150900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.151017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.151170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.151347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.151500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.151631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 
00:25:50.112 [2024-07-24 19:55:07.151808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.151940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.151965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.152108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.152133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.152226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.152259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.152394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.152419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.152527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.152553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.112 [2024-07-24 19:55:07.152661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.112 [2024-07-24 19:55:07.152686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.112 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.152788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.152813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.152916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.152941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.153072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.153097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 
00:25:50.113 [2024-07-24 19:55:07.153200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.153225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.153370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.153396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.153524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.153549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.153680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.153706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.153841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.153866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.153999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.154167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.154335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.154465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.154609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 
00:25:50.113 [2024-07-24 19:55:07.154739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.154923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.154948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.155082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.155275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.155436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.155564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.155731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.155875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.155983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.156143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 
00:25:50.113 [2024-07-24 19:55:07.156294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.156466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.156602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.156761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.156890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.156915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.157054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.157080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.157248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.157297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.157406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.157433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.157580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.157605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.157735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.157761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 
00:25:50.113 [2024-07-24 19:55:07.157866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.157893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.158000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.158026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.158164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.158192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.113 [2024-07-24 19:55:07.158302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.113 [2024-07-24 19:55:07.158327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.113 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.158458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.158484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.158627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.158657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.158759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.158786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.158893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.158919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.159062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.159090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.159193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.159220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 
00:25:50.114 [2024-07-24 19:55:07.159371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.159409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.159550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.159578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.159715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.159741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.159853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.159880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.159995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.160021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.160154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.160180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.160326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.160353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.160466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.160493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.160651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.160676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.160821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.160847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 
00:25:50.114 [2024-07-24 19:55:07.161005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.161171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.161347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.161508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.161643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.161770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.161913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.161938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.162100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.162125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.162229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.162263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.162411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.162436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 
00:25:50.114 [2024-07-24 19:55:07.162569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.162594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.162694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.162720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.162875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.162914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.163085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.163221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.163358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.163520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.163666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.163866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.163975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.164000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 
00:25:50.114 [2024-07-24 19:55:07.164113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.114 [2024-07-24 19:55:07.164138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.114 qpair failed and we were unable to recover it. 00:25:50.114 [2024-07-24 19:55:07.164301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.164328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.164435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.164460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.164602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.164628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.164759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.164784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.164915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.164945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.165081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.165108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.165268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.165294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.165430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.165455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.165581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.165607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 
00:25:50.115 [2024-07-24 19:55:07.165741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.165766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.165869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.165896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.166903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.166928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.167053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.167078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 
00:25:50.115 [2024-07-24 19:55:07.167218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.167251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.167390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.167415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.167529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.167554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.167673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.167699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.167839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.167864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.168025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.168184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.168340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.168476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.168621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 
00:25:50.115 [2024-07-24 19:55:07.168753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.168932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.168957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.169081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.169119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.169254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.169281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.169391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.169416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.169545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.169570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.169693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.169719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.169881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.169906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.115 [2024-07-24 19:55:07.170012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.115 [2024-07-24 19:55:07.170038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.115 qpair failed and we were unable to recover it. 00:25:50.116 [2024-07-24 19:55:07.170172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.116 [2024-07-24 19:55:07.170197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.116 qpair failed and we were unable to recover it. 
00:25:50.116 [2024-07-24 19:55:07.170311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.116 [2024-07-24 19:55:07.170338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.116 qpair failed and we were unable to recover it.
00:25:50.116 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7fce8c000b90 and tqpair=0x7fce84000b90 from 19:55:07.170467 through 19:55:07.172298 ...]
00:25:50.116 [2024-07-24 19:55:07.172412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.116 [2024-07-24 19:55:07.172441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.116 qpair failed and we were unable to recover it.
00:25:50.117 [... the triplet repeats continuously for tqpair=0x7fce7c000b90 from 19:55:07.172550 through 19:55:07.189230 ...]
00:25:50.119 [2024-07-24 19:55:07.189258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:50.122 [... the triplet resumes for tqpair=0x7fce7c000b90 at 19:55:07.189401 and repeats through 19:55:07.202900, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:50.122 [2024-07-24 19:55:07.203070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.203096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.203206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.203232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.203385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.203410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.203543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.203589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.203728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.203772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.203899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.203925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.204033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.204058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.204203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.204229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.204393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.204419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.204525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.204550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 
00:25:50.122 [2024-07-24 19:55:07.204679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.204729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.204872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.204897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.205939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.205964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.206097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.206123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 
00:25:50.122 [2024-07-24 19:55:07.206275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.206304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.206421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.206446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.206566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.206592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.206707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.206732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.206871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.206897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.207030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.207055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.207182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.207207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.207322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.207348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.207456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.122 [2024-07-24 19:55:07.207480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.122 qpair failed and we were unable to recover it. 00:25:50.122 [2024-07-24 19:55:07.207653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.207678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 
00:25:50.123 [2024-07-24 19:55:07.207808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.207832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.207940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.207965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.208073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.208098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.208218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.208248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.208396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.208421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.208536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.208561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.208672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.208702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.208859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.208884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.209021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.209184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 
00:25:50.123 [2024-07-24 19:55:07.209341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.209475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.209635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.209767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.209889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.209915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.210051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.210076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.210203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.210228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.210370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.210396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.210531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.210555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.210690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.210717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 
00:25:50.123 [2024-07-24 19:55:07.210860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.210885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.211904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.211929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.212056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.212218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 
00:25:50.123 [2024-07-24 19:55:07.212360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.212492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.212629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.212789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.123 [2024-07-24 19:55:07.212945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.123 [2024-07-24 19:55:07.212969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.123 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.213081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.213247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.213411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.213543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.213702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 
00:25:50.124 [2024-07-24 19:55:07.213830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.213957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.213982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.214100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.214126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.214281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.214307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.214447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.214472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.214606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.214631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.214763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.214793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.214898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.214922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.215030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.215055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.215189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.215214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 
00:25:50.124 [2024-07-24 19:55:07.215361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.215387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.215544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.215570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.215734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.215759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.215890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.215915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.216050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.216076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.216205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.216231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.216379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.216406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.216567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.216592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.216702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.216728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.216876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.216901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 
00:25:50.124 [2024-07-24 19:55:07.217037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.217063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.217203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.217230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.217382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.217408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.217543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.217568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.217708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.217733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.217867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.217894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.218002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.218029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.218144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.218171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.218314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.218341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 00:25:50.124 [2024-07-24 19:55:07.218448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.124 [2024-07-24 19:55:07.218475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.124 qpair failed and we were unable to recover it. 
00:25:50.124 [2024-07-24 19:55:07.218588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.218613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.218752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.218777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.218909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.218935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.219064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.219106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.219286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.219326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.219452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.219478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.219625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.219650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.219788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.219813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.219924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.219949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.220110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.220134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 
00:25:50.125 [2024-07-24 19:55:07.220265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.220304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.220436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.220462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.220569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.220595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.220759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.220785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.220918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.220942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.221043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.221197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.221370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.221504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.221665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 
00:25:50.125 [2024-07-24 19:55:07.221801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.221957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.221984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.222089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.222114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.222266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.222301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.222430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.222455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.222560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.222585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.222695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.222721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.222854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.222879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.223008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.223034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 00:25:50.125 [2024-07-24 19:55:07.223133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.125 [2024-07-24 19:55:07.223158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.125 qpair failed and we were unable to recover it. 
00:25:50.125 [2024-07-24 19:55:07.223308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.125 [2024-07-24 19:55:07.223337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.125 qpair failed and we were unable to recover it.
[... the same three-line error triplet — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error, followed by "qpair failed and we were unable to recover it." — repeats continuously from 19:55:07.223 through 19:55:07.255, alternating across tqpair=0x7fce8c000b90, tqpair=0x7fce7c000b90, and tqpair=0x5b5250, all targeting addr=10.0.0.2, port=4420 ...]
00:25:50.132 [2024-07-24 19:55:07.255067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.132 [2024-07-24 19:55:07.255092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.132 qpair failed and we were unable to recover it.
00:25:50.132 [2024-07-24 19:55:07.255225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.255261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.255399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.255424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.255570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.255595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.255729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.255755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.255891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.255917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.256029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.256189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.256333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.256495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.256646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 
00:25:50.132 [2024-07-24 19:55:07.256777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.256904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.256928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.257960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.257985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.258100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.258126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 
00:25:50.132 [2024-07-24 19:55:07.258232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.258263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.258395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.258421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.258552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.258577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.258736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.258761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.258893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.258919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.259040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.259065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.259201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.259226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.259346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.259371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.259476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.259500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 00:25:50.132 [2024-07-24 19:55:07.259659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.132 [2024-07-24 19:55:07.259684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.132 qpair failed and we were unable to recover it. 
00:25:50.132 [2024-07-24 19:55:07.259809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.259834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.259964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.259989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.260097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.260128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.260264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.260290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.260404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.260429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.260573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.260598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.260734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.260759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.260868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.260893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.261056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.261211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 
00:25:50.133 [2024-07-24 19:55:07.261385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.261513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.261652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.261806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.261957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.261982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.262106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.262131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.262239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.262269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.262398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.262423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.262559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.262584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.262690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.262715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 
00:25:50.133 [2024-07-24 19:55:07.262871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.262896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.263966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.263991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.264104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.264131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.264260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.264285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 
00:25:50.133 [2024-07-24 19:55:07.264424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.264450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.264561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.264587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.264720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.264745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.264870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.264895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.265014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.265039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.265165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.265190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.265292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.133 [2024-07-24 19:55:07.265317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.133 qpair failed and we were unable to recover it. 00:25:50.133 [2024-07-24 19:55:07.265452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.265477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.265635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.265660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.265784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.265809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 
00:25:50.134 [2024-07-24 19:55:07.265941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.265966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.266124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.266149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.266277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.266302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.266435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.266465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.266601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.266627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.266770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.266794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.266918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.266943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.267047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.267218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.267357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 
00:25:50.134 [2024-07-24 19:55:07.267486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.267622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.267747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.267877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.267904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.268064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.268090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.268218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.268249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.268397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.268422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.268579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.268605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.268747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.268772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.268884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.268909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 
00:25:50.134 [2024-07-24 19:55:07.269007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.269170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.269300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.269456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.269609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.269768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.269895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.269921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.270030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.270055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.270159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.270185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.134 qpair failed and we were unable to recover it. 00:25:50.134 [2024-07-24 19:55:07.270284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.134 [2024-07-24 19:55:07.270310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 
00:25:50.135 [2024-07-24 19:55:07.270469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.270494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.270629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.270655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.270815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.270841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.270974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.270999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.271161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.271186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.271298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.271323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.271436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.271462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.271621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.271646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.271781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.271806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.271911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.271936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 
00:25:50.135 [2024-07-24 19:55:07.272068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.272095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.272205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.272230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.272345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.272370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.272487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.272521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.272670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.272696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.272828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.272853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.272980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.273111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.273248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.273387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 
00:25:50.135 [2024-07-24 19:55:07.273518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.273649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.273784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.273909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.273934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.274060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.274192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.274325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.274463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.274598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.274786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 
00:25:50.135 [2024-07-24 19:55:07.274917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.274942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.275075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.135 [2024-07-24 19:55:07.275100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.135 qpair failed and we were unable to recover it. 00:25:50.135 [2024-07-24 19:55:07.275228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.275272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.275389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.275414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.275578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.275603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.275740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.275765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.275921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.275946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.276104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.276129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.276289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.276315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.276449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.276474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 
00:25:50.136 [2024-07-24 19:55:07.276624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.276650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.276795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.276825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.276993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.277949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.277975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 
00:25:50.136 [2024-07-24 19:55:07.278101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.278127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.278266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.278292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.278429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.278454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.278570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.278595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.278740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.278765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.278925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.278950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.279090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.279116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.279225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.279255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.279387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.279412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.279554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.279579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 
00:25:50.136 [2024-07-24 19:55:07.279689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.279715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.279852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.279877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.136 qpair failed and we were unable to recover it. 00:25:50.136 [2024-07-24 19:55:07.279978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.136 [2024-07-24 19:55:07.280003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.280137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.280164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.280319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.280344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.280457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.280483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.280619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.280644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.280748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.280773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.280925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.280950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.281087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 
00:25:50.137 [2024-07-24 19:55:07.281222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.281362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.281494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.281664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.281817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.281973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.281999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.282169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.282194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.282343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.282368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.282478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.282515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.282673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.282698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 
00:25:50.137 [2024-07-24 19:55:07.282856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.282880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.282983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.283152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.283363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.283529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.283655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.283810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.283974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.283999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.284158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.284184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.284389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.284415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 
00:25:50.137 [2024-07-24 19:55:07.284514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.284549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.284684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.284710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.284812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.284838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.284964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.284991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.285128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.285154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.285285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.285312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.285452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.285477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.137 [2024-07-24 19:55:07.285606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.137 [2024-07-24 19:55:07.285631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.137 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.285758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.285783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.285939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.285965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 
00:25:50.138 [2024-07-24 19:55:07.286128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.286265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.286396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.286551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.286679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.286807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.286946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.286971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.287080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.287105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.287269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.287294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.287404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.287430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 
00:25:50.138 [2024-07-24 19:55:07.287542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.287567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.287676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.287702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.287835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.287860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.287985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.288125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.288256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.288404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.288585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.288752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.288889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.288914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 
00:25:50.138 [2024-07-24 19:55:07.289041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.289066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.289200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.289226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.289346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.289376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.289518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.289543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.289646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.289671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.289827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.289851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.289987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.290011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.290118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.290143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.290272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.290297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.290506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.290531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 
00:25:50.138 [2024-07-24 19:55:07.290675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.290700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.290849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.290874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.291013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.291038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.291259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.291284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.138 qpair failed and we were unable to recover it. 00:25:50.138 [2024-07-24 19:55:07.291406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.138 [2024-07-24 19:55:07.291431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.291549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.291574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.291721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.291760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.291910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.291937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.292047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.292073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.292205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.292231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 
00:25:50.139 [2024-07-24 19:55:07.292370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.292397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.292540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.292565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.292698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.292724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.292853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.292880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.293029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.293055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.293198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.293224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.293366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.293393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.293507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.293533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.293691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.293716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.293844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.293874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 
00:25:50.139 [2024-07-24 19:55:07.293994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.294176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.294318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.294504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.294636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.294799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.294962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.294988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.295091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.295234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.295378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 
00:25:50.139 [2024-07-24 19:55:07.295542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.295663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.295794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.295936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.295962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.296081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.296107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.296227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.296261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.296398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.296425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.296583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.296609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.296710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.296736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 00:25:50.139 [2024-07-24 19:55:07.296877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.296906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.139 qpair failed and we were unable to recover it. 
00:25:50.139 [2024-07-24 19:55:07.297012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.139 [2024-07-24 19:55:07.297039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.297202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.297227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.297336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.297363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.297497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.297524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.297653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.297679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.297787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.297813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.297940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.297971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.298079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.298105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.298233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.298266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.298400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.298426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 
00:25:50.140 [2024-07-24 19:55:07.298539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.298565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.298665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.298691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.298822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.298850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.298985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.299145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.299276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.299417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.299570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.299691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.299848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.299873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 
00:25:50.140 [2024-07-24 19:55:07.300008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.300033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.300170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.300195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.300326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.300352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.300455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.300481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.300592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.300617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.300750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.300776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.300999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.301188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.301329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.301457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 
00:25:50.140 [2024-07-24 19:55:07.301587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.301746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.301943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.301969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.302082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.302109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.302219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.302251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.140 [2024-07-24 19:55:07.302384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.140 [2024-07-24 19:55:07.302409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.140 qpair failed and we were unable to recover it. 00:25:50.141 [2024-07-24 19:55:07.302524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.141 [2024-07-24 19:55:07.302551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.141 qpair failed and we were unable to recover it. 00:25:50.141 [2024-07-24 19:55:07.302669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.141 [2024-07-24 19:55:07.302694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.141 qpair failed and we were unable to recover it. 00:25:50.141 [2024-07-24 19:55:07.302809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.141 [2024-07-24 19:55:07.302836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.141 qpair failed and we were unable to recover it. 00:25:50.141 [2024-07-24 19:55:07.302938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.141 [2024-07-24 19:55:07.302964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.141 qpair failed and we were unable to recover it. 
00:25:50.146 [2024-07-24 19:55:07.328542] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:50.146 [2024-07-24 19:55:07.328581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:50.146 [2024-07-24 19:55:07.328596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:50.146 [2024-07-24 19:55:07.328608] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:50.146 [2024-07-24 19:55:07.328618] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:50.146 [2024-07-24 19:55:07.328838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:50.146 [2024-07-24 19:55:07.328868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
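For context, the app.c NOTICE block above is SPDK's standard trace-setup banner, printed once as the target application starts: mask 0xFFFF enables all tracepoint groups, and the banner itself lists the two ways to inspect the trace (run the spdk_trace tool against instance 0, or copy /dev/shm/nvmf_trace.0 off the host for offline analysis). The reactor_run NOTICE lines record SPDK's per-core poller threads coming online; neither banner indicates a failure, and both are independent of the connect() errors being retried around them.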
00:25:50.146 [... the triplet repeats back-to-back for tqpair=0x7fce84000b90 (19:55:07.329268 through 19:55:07.330678); the records differ only in their timestamps ...]
00:25:50.146 [... the triplet repeats for tqpair=0x7fce84000b90 (19:55:07.330813 through 19:55:07.331111) and then for tqpair=0x5b5250 (19:55:07.331248 through 19:55:07.331873) ...]
00:25:50.147 [2024-07-24 19:55:07.331862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:50.147 [2024-07-24 19:55:07.331897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:50.147 [... the triplet repeats for tqpair=0x5b5250 (19:55:07.332005) ...]
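The four reactor_run notices (cores 5, 6, 7 and 4) are the SPDK event framework bringing up one reactor per core in the app's core mask, here cores 4-7. Purely as an illustration, since the actual command line is not shown in this section, that layout corresponds to a mask of 0xF0:

    # hypothetical launch: bits 4-7 set, so reactors start on cores 4, 5, 6 and 7
    ./build/bin/nvmf_tgt -m 0xF0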
00:25:50.147 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats continuously, alternating between tqpair=0x7fce84000b90 and tqpair=0x5b5250, from 19:55:07.332138 through 19:55:07.344856; the records differ only in their timestamps ...]
00:25:50.149 [... the triplet repeats once more for tqpair=0x7fce84000b90 (19:55:07.345007) ...]
00:25:50.149 [2024-07-24 19:55:07.345212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.149 [2024-07-24 19:55:07.345258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.149 qpair failed and we were unable to recover it.
00:25:50.149 [... the triplet then repeats for the new tqpair=0x7fce7c000b90 (19:55:07.345385 through 19:55:07.346409) ...]
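Three distinct tqpair values appear in this storm (0x5b5250, 0x7fce84000b90 and now 0x7fce7c000b90); each is just the address of a qpair object in the host process, so a new value indicates a freshly allocated qpair for another reconnect attempt, not a different target. A quick way to tally attempts per object from a saved copy of this console output (the file name build.log is an assumption):

    # count failure records per qpair object address
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c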
00:25:50.149 [... the triplet continues to repeat, alternating between tqpair=0x7fce7c000b90 and tqpair=0x5b5250, from 19:55:07.346559 through 19:55:07.356546; every connection attempt to 10.0.0.2, port=4420 in this window fails the same way ...]
00:25:50.151 [2024-07-24 19:55:07.356672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.356697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.356800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.356825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.356933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.356963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.357061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.357086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.357195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.357223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.357374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.357415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.357532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.357559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.357668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.357694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.151 [2024-07-24 19:55:07.357815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.151 [2024-07-24 19:55:07.357841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.151 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.357973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.357999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 
00:25:50.152 [2024-07-24 19:55:07.358103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.358227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.358367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.358504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.358648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.358790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.358968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.358996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.359105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.359131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.359238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.359270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.359389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.359414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 
00:25:50.152 [2024-07-24 19:55:07.359550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.359576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.359686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.359712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.359843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.359869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.359987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.360131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.360264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.360400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.360521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.360646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.360778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 
00:25:50.152 [2024-07-24 19:55:07.360921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.360947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.361104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.361231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.361435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.361589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.361718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.361892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.361995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.362162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.362328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 
00:25:50.152 [2024-07-24 19:55:07.362457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.362589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.362728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.362864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.362891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.152 [2024-07-24 19:55:07.363005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.152 [2024-07-24 19:55:07.363031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.152 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.363132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.363157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.363268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.363294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.363450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.363475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.363576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.363601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.363736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.363762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 
00:25:50.153 [2024-07-24 19:55:07.363895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.363921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.364868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.364991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.365134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 
00:25:50.153 [2024-07-24 19:55:07.365273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.365406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.365536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.365662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.365821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.365951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.365977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.366078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.366104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.366203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.366229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.366459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.366486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.366630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.366656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 
00:25:50.153 [2024-07-24 19:55:07.366774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.366799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.366901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.366927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.367875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.367901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.368017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 
00:25:50.153 [2024-07-24 19:55:07.368159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.368365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.368502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.368652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.368798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.368945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.368971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.369129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.369154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.153 [2024-07-24 19:55:07.369301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.153 [2024-07-24 19:55:07.369327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.153 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.369430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.369455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.369560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.369586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 
00:25:50.154 [2024-07-24 19:55:07.369722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.369747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.369849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.369874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.369982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.370971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.370996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 
00:25:50.154 [2024-07-24 19:55:07.371119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.371145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.371323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.371349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.371478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.371503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.371613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.371638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.371743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.371769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.371870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.371894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.371990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.372112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.372275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.372408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 
00:25:50.154 [2024-07-24 19:55:07.372549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.372687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.372819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.372845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.372998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.373125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.373275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.373406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.373534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.373667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.373789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 
00:25:50.154 [2024-07-24 19:55:07.373922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.373948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.374877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.374903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.375019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.154 [2024-07-24 19:55:07.375044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.154 qpair failed and we were unable to recover it. 00:25:50.154 [2024-07-24 19:55:07.375177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.155 [2024-07-24 19:55:07.375203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.155 qpair failed and we were unable to recover it. 
00:25:50.155 [2024-07-24 19:55:07.375305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.155 [2024-07-24 19:55:07.375331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.155 qpair failed and we were unable to recover it.
00:25:50.156 [2024-07-24 19:55:07.380336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.156 [2024-07-24 19:55:07.380377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.156 qpair failed and we were unable to recover it.
00:25:50.156 [2024-07-24 19:55:07.384084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.156 [2024-07-24 19:55:07.384118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.156 qpair failed and we were unable to recover it.
00:25:50.161 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it") repeats continuously for tqpair=0x5b5250, 0x7fce7c000b90, and 0x7fce84000b90, always against addr=10.0.0.2, port=4420, through 2024-07-24 19:55:07.404998; no connection attempt succeeds and no qpair recovers ...]
00:25:50.161 [2024-07-24 19:55:07.405097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.405126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.405235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.405267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.405403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.405428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.405579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.405622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.405766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.405793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.405896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.405922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.406050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.406208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.406372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.406504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 
00:25:50.161 [2024-07-24 19:55:07.406634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.406762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.406895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.406920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.407971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.407998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 
00:25:50.161 [2024-07-24 19:55:07.408105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.408239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.408378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.408532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.408654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.408809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.408933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.408957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.409102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.161 [2024-07-24 19:55:07.409141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.161 qpair failed and we were unable to recover it. 00:25:50.161 [2024-07-24 19:55:07.409272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.409301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.409404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.409430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 
00:25:50.162 [2024-07-24 19:55:07.409561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.409587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.409688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.409713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.409819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.409845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.409952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.409978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.410076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.410101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.410228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.410264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.410373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.410399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.410508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.410533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.410688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.410713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.410817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.410842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 
00:25:50.162 [2024-07-24 19:55:07.410974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.411117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.411253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.411393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.411555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.411677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.411841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.411866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.412040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.412195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.412371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 
00:25:50.162 [2024-07-24 19:55:07.412532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.412671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.412815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.412936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.412962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 
00:25:50.162 [2024-07-24 19:55:07.413841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.413867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.413998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.414023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.414131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.414156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.414284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.414310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.162 [2024-07-24 19:55:07.414423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.162 [2024-07-24 19:55:07.414449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.162 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.414551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.414576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.414681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.414706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.414810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.414835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.414943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.414968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.415097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 
00:25:50.163 [2024-07-24 19:55:07.415257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.415384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.415544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.415676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.415802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.415931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.415957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.416093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.416224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.416368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.416509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 
00:25:50.163 [2024-07-24 19:55:07.416638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.416773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.416908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.416934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.417873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.417899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 
00:25:50.163 [2024-07-24 19:55:07.418016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.418171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.418306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.418477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.418633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.418762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.418917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.418942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.419076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.419101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.419229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.419260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.419358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.419383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 
00:25:50.163 [2024-07-24 19:55:07.419484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.419509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.419640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.163 [2024-07-24 19:55:07.419665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.163 qpair failed and we were unable to recover it. 00:25:50.163 [2024-07-24 19:55:07.419770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.419796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.419932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.419958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.420065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.420224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.420395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.420523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.420651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.420818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 
00:25:50.164 [2024-07-24 19:55:07.420947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.420974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.421947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.421972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.422109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.422258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 
00:25:50.164 [2024-07-24 19:55:07.422387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.422524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.422687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.422810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.422952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.422978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.423074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.423101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.423206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.423232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.423378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.423404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.423539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.423564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 00:25:50.164 [2024-07-24 19:55:07.423699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.164 [2024-07-24 19:55:07.423726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.164 qpair failed and we were unable to recover it. 
00:25:50.164 [2024-07-24 19:55:07.423832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.164 [2024-07-24 19:55:07.423858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.164 qpair failed and we were unable to recover it.
00:25:50.164 [2024-07-24 19:55:07.424998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.164 [2024-07-24 19:55:07.425026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.164 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 19:55:07.423832 through 19:55:07.453732, alternating between tqpair=0x7fce84000b90 and tqpair=0x5b5250, always with addr=10.0.0.2, port=4420 ...]
00:25:50.170 [2024-07-24 19:55:07.453868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.453893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.453996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.454135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.454324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.454461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.454593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.454761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.454917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.454943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.455055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.455083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.170 [2024-07-24 19:55:07.455222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.455253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 
00:25:50.170 [2024-07-24 19:55:07.455359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.170 [2024-07-24 19:55:07.455384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.170 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.455491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.455516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.455627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.455652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.455756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.455781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.455881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.455907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 
00:25:50.171 [2024-07-24 19:55:07.456720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.456875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.456986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.457873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.457977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 
00:25:50.171 [2024-07-24 19:55:07.458109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.458262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.458401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.458538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.458693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.458818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.458946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.458970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.459084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.459124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.459250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.459277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.459395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.459420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 
00:25:50.171 [2024-07-24 19:55:07.459557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.459583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.459706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.459731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.459842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.459868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.460005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.460032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.460137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.460163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.171 [2024-07-24 19:55:07.460272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.171 [2024-07-24 19:55:07.460297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.171 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.460399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.460424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.460530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.460555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.460653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.460678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.460787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.460812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 
00:25:50.441 [2024-07-24 19:55:07.460915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.460940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.461062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.461103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.461262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.461294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.441 qpair failed and we were unable to recover it. 00:25:50.441 [2024-07-24 19:55:07.461414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.441 [2024-07-24 19:55:07.461440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.461557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.461582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.461706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.461732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.461842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.461867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.461975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.462134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.462271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 
00:25:50.442 [2024-07-24 19:55:07.462396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.462527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.462652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.462788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.462928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.462953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 
00:25:50.442 [2024-07-24 19:55:07.463677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.463954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.463978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.464916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.464948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 
00:25:50.442 [2024-07-24 19:55:07.465052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.465876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.465996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.466022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.466180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.442 [2024-07-24 19:55:07.466208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.442 qpair failed and we were unable to recover it. 00:25:50.442 [2024-07-24 19:55:07.466316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.443 [2024-07-24 19:55:07.466342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420 00:25:50.443 qpair failed and we were unable to recover it. 
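Every failure in the run above is the same event: errno = 111 is ECONNREFUSED on Linux, meaning each connect() issued from SPDK's posix sock layer (posix.c:1023, posix_sock_create) reached 10.0.0.2 but was answered with a TCP reset because nothing was listening on port 4420 (the standard NVMe/TCP port), so nvme_tcp_qpair_connect_sock() fails and the qpair is dropped. A minimal standalone sketch of that failure mode follows; it is illustrative only, not SPDK code, and assumes a Linux host from which 10.0.0.2 is reachable but has no listener on 4420 (an unreachable host would instead time out with a different errno):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same address and port as the failing qpair connects in the log. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With the peer up but no listener on the port, this prints
         * "connect() failed, errno = 111 (Connection refused)". */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}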
00:25:50.443 [2024-07-24 19:55:07.466439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.466464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.466560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.466584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.466718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.466748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # (( i == 0 ))
00:25:50.443 [2024-07-24 19:55:07.466886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.466913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.467018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@865 -- # return 0
00:25:50.443 [2024-07-24 19:55:07.467157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.467329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.467496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt
00:25:50.443 [2024-07-24 19:55:07.467620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.467747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@731 -- # xtrace_disable
00:25:50.443 [2024-07-24 19:55:07.467910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.467935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.468060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.468100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.468250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.468278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.468384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.468411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.468517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.468543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.468706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.468732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.468881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.468907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.469053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
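The interleaved lines of the form "19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@861 -- # (( i == 0 ))" are not SPDK log messages but bash trace output from the test scripts: SPDK's autotest harness sets PS4 so each traced command is prefixed with the wall-clock time, the test path, and the script file and line being executed. Read that way, this span appears to show the nvmf_target_disconnect_tc2 case passing a wait loop in autotest_common.sh ("(( i == 0 ))" then "return 0"), leaving "timing_exit start_nvmf_tgt", and turning tracing off ("xtrace_disable", "set +x") while the host-side qpair connects are still being refused, which is consistent with the target having just been restarted by the disconnect test.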
00:25:50.443 [2024-07-24 19:55:07.469210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.469366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.469495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.469630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.469756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.469900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.469925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.470916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.470942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.471038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.471065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.471174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.443 [2024-07-24 19:55:07.471202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.443 qpair failed and we were unable to recover it.
00:25:50.443 [2024-07-24 19:55:07.471328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.471354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.471453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.471479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.471587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.471614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.471723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.471749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.471861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.471887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.471990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.472946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.472972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.473973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.473998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce8c000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.474142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.474179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.474323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.474357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.474462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.474489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.474600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.474626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.474736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.474762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.474874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.474901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.475964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.475989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.476116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.444 [2024-07-24 19:55:07.476141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.444 qpair failed and we were unable to recover it.
00:25:50.444 [2024-07-24 19:55:07.476246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.476273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.476379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.476405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.476520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.476547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.476679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.476705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.476802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.476829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.476936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.476962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.477093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.477118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.477257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.477283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.477387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.477412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.477513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.445 [2024-07-24 19:55:07.477538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.445 qpair failed and we were unable to recover it.
00:25:50.445 [2024-07-24 19:55:07.477642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.477667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.477767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.477794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.477899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.477924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.478948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.478975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 
00:25:50.445 [2024-07-24 19:55:07.479094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.479254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.479385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.479518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.479645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.479797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.479931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.479956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.480065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.480203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.480345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 
00:25:50.445 [2024-07-24 19:55:07.480473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.480593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.480725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.480882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.480908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.481046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.481074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.481184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.481211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.481317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.445 [2024-07-24 19:55:07.481342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.445 qpair failed and we were unable to recover it. 00:25:50.445 [2024-07-24 19:55:07.481446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.481472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.481583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.481609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.481714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.481740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 
00:25:50.446 [2024-07-24 19:55:07.481881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.481907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.482907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.482933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.483033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.483163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 
00:25:50.446 [2024-07-24 19:55:07.483297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.483429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.483563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.483728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.483899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.483925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.484031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.484169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.484339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.484473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.484634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 
00:25:50.446 [2024-07-24 19:55:07.484759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.484890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.484915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.485048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.485074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.485177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.485202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.485330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.485370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.485484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.446 [2024-07-24 19:55:07.485510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.446 qpair failed and we were unable to recover it. 00:25:50.446 [2024-07-24 19:55:07.485642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.485667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.485767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.485794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.485924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.485949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.486054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 
00:25:50.447 [2024-07-24 19:55:07.486208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.486375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.486501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.486638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.486825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.486963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.486989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.487103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.487128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.487230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.487265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.487402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.487428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.487539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.487564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 
00:25:50.447 [2024-07-24 19:55:07.487706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.487732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.487845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.487872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.487984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.488941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.488967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 
00:25:50.447 [2024-07-24 19:55:07.489071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.489097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.489203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.489229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.489377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.489402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.489533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.489558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.489700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.489726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.489832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.489857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.490017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.490052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.490162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.490187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.490367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.490394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 00:25:50.447 [2024-07-24 19:55:07.490500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.447 [2024-07-24 19:55:07.490525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.447 qpair failed and we were unable to recover it. 
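Every failure above is the same condition: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections on 10.0.0.2:4420 while the host keeps retrying, which is the expected state while the target side of this disconnect test is down. A quick manual check for the same condition (a sketch for the initiator host; OpenBSD-style nc and python3 are assumed available and are not part of the harness):

    nc -z -w1 10.0.0.2 4420; echo "listener check exit: $?"    # non-zero while nothing listens on 4420
    python3 -c 'import errno; print(errno.errorcode[111])'     # prints ECONNREFUSED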
00:25:50.447 [2024-07-24 19:55:07.490627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.447 [2024-07-24 19:55:07.490653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.447 qpair failed and we were unable to recover it.
00:25:50.447 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:50.448 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:25:50.448 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:50.448 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... seven more identical error groups for tqpair=0x5b5250 (19:55:07.490757 through 19:55:07.491652) arrive interleaved with the trace lines above ...]
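The trace lines above show the harness arming its cleanup trap (a best-effort process_shm dump followed by nvmftestfini on interrupt or exit) and then provisioning the backing store for the test namespace with rpc_cmd. Outside the harness, the same setup step can be issued directly with SPDK's RPC client; a sketch assuming an SPDK checkout at ./spdk and a target already listening on the default RPC socket:

    # Create a 64 MB RAM-backed bdev named Malloc0 with a 512-byte block size.
    ./spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0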
[... the retry storm continues uninterrupted from 19:55:07.491762 through 19:55:07.501596: every connect() to 10.0.0.2 port 4420 fails with errno = 111, the qpair error alternates among tqpair=0x5b5250, 0x7fce84000b90, and 0x7fce7c000b90, and each group ends "qpair failed and we were unable to recover it." ...]
00:25:50.450 [2024-07-24 19:55:07.501730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.501756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.501866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.501894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.502863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.502888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.503107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.503134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 
00:25:50.450 [2024-07-24 19:55:07.503277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.503310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.503424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.503451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.503565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.503590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.503724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.503750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.503901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.503926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.504037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.504172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.504342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.504498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.504627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 
00:25:50.450 [2024-07-24 19:55:07.504754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.504921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.504946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.505903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.505928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.506068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.506093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 
00:25:50.450 [2024-07-24 19:55:07.506226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.506261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.506388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.506414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.506531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.506557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.506689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.450 [2024-07-24 19:55:07.506714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.450 qpair failed and we were unable to recover it. 00:25:50.450 [2024-07-24 19:55:07.506852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.506877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.506984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.507124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.507277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.507408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.507573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 
00:25:50.451 [2024-07-24 19:55:07.507737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.507894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.507919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.508895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.508923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.509034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 
00:25:50.451 [2024-07-24 19:55:07.509202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.509374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.509517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.509646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.509777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.509964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.509990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.510098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.510124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.510258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.510284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.510388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.510413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.510511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.510536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 
00:25:50.451 [2024-07-24 19:55:07.510637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.510662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.510824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.510850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.510981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.511007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.451 [2024-07-24 19:55:07.511110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.451 [2024-07-24 19:55:07.511135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.451 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.511249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.511275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.511379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.511406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.511550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.511575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.511712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.511737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.511841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.511866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.511974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 
00:25:50.452 [2024-07-24 19:55:07.512109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.512234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.512409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.512543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.512668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.512824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.512849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.512987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.513126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.513293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.513440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 
00:25:50.452 [2024-07-24 19:55:07.513568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.513702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.513841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.513962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.513987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.514127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.514152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.514290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.514315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.514439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.514465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.514570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.514597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.514759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.514784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.514889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.514916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 
00:25:50.452 [2024-07-24 19:55:07.515047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.515213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.515408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.515536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.515671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.515801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.515941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.515967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.516072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.516097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.516256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.516283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 00:25:50.452 [2024-07-24 19:55:07.516397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.452 [2024-07-24 19:55:07.516422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.452 qpair failed and we were unable to recover it. 
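From this point the retries cycle through a third tqpair pointer, 0x7fce7c000b90, alongside the 0x5b5250 and 0x7fce84000b90 seen above, so several qpair objects are looping through the same connect-and-teardown path. One way to tally the attempts per qpair when this console output is saved to a file (the name build.log is only an assumption for the sketch):

    # Count retry lines per tqpair pointer to see how the failures are spread
    # across the qpair objects named in the log.
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn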
00:25:50.452 [2024-07-24 19:55:07.516560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.452 [2024-07-24 19:55:07.516585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.452 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.516697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.516722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.516841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.516869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.516975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.517108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.517238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.517386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.517518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.517675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 Malloc0
00:25:50.453 [2024-07-24 19:55:07.517838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.517864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
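The bare Malloc0 interleaved above is target-side output naming the ramdisk bdev the harness creates to back the subsystem namespace. Under that reading, the RPC it answers would be roughly the following (the 64 MiB / 512 B sizing is assumed, not shown in this log):

    # Create a malloc bdev named Malloc0; the RPC prints the bdev name,
    # which is the lone "Malloc0" that lands in the console output.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512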
00:25:50.453 [2024-07-24 19:55:07.517999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.518118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.518256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:50.453 [2024-07-24 19:55:07.518383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.518514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:50.453 [2024-07-24 19:55:07.518651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.518775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.518800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.518897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:50.453 [2024-07-24 19:55:07.518924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.519035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
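The rpc_cmd nvmf_create_transport -t tcp -o trace line is the harness asking the target to create its TCP transport; the [[ 0 == 0 ]] and xtrace_disable lines around it are the wrapper's own bash plumbing leaking into the console via set -x. A minimal sketch of such a wrapper, assuming the usual autotest layout (the real helper lives in test/common/autotest_common.sh and may differ):

    # Forward an RPC verb and its arguments to the running SPDK target app.
    rpc_cmd() {
        "$rootdir/scripts/rpc.py" "$@"
    }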
00:25:50.453 [2024-07-24 19:55:07.519157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.519333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.519481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.519611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.519749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.519888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.519913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.520965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.520990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.521084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.521109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.521249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.521275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.521385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.521410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.453 [2024-07-24 19:55:07.521424] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:50.453 [2024-07-24 19:55:07.521513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.453 [2024-07-24 19:55:07.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.453 qpair failed and we were unable to recover it.
00:25:50.454 [2024-07-24 19:55:07.521679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.454 [2024-07-24 19:55:07.521704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.454 qpair failed and we were unable to recover it.
00:25:50.454 [2024-07-24 19:55:07.521812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.454 [2024-07-24 19:55:07.521837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.454 qpair failed and we were unable to recover it.
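The *** TCP Transport Init *** notice from nvmf_tcp_create above confirms that the nvmf_create_transport RPC succeeded inside the target. The initiator's connects will keep being refused until a listener is also added, which in the usual SPDK bring-up order looks roughly like this (the subsystem NQN is assumed; it does not appear in this excerpt):

    # Target-side sequence: transport, subsystem, namespace, then listener;
    # only the final step makes 10.0.0.2:4420 start accepting connections.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420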
00:25:50.454 [2024-07-24 19:55:07.521942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.521970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.522970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.522995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.523102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.523260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 
00:25:50.454 [2024-07-24 19:55:07.523402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.523554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.523688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.523819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.523951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.523976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.524116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.524144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.524311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.524338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.524469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.524499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.524609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.524636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.524772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.524797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 
00:25:50.454 [2024-07-24 19:55:07.524925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.524950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.525950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.525977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.526082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.526107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.526220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.526249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 
00:25:50.454 [2024-07-24 19:55:07.526367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.526392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.526510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.526535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.526669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.526695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.526834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.526862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.526973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.454 [2024-07-24 19:55:07.527000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.454 qpair failed and we were unable to recover it. 00:25:50.454 [2024-07-24 19:55:07.527105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.527232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.527408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.527534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.527660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 
00:25:50.455 [2024-07-24 19:55:07.527793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.527958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.527983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.528124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.528283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.528421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.528584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.528747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.528878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.528988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.529015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.529123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.529148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 
00:25:50.455 [2024-07-24 19:55:07.529259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.529285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 [2024-07-24 19:55:07.529414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.529439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 [2024-07-24 19:55:07.529555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.529581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 [2024-07-24 19:55:07.529688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.529714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 [2024-07-24 19:55:07.529816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.529843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:50.455 [2024-07-24 19:55:07.529959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.529986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:50.455 [2024-07-24 19:55:07.530113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.530139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:50.455 [2024-07-24 19:55:07.530265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.455 [2024-07-24 19:55:07.530300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.455 qpair failed and we were unable to recover it.
00:25:50.455 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:50.455 [2024-07-24 19:55:07.530409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.530434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.530563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.530588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.530697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.530722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.530862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.530887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.530985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.531117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.531260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.531404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.531533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 
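The rpc_cmd trace interleaved above (host/target_disconnect.sh line 22, bracketed by the xtrace_disable/set +x pair from autotest_common.sh) creates the subsystem the initiator is trying to reach. The equivalent direct invocation, assuming the in-tree scripts/rpc.py client rather than the autotest rpc_cmd wrapper:

  # -a allows any host to connect; -s sets the serial number the test expects
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001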
00:25:50.455 [2024-07-24 19:55:07.531688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.531823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.455 [2024-07-24 19:55:07.531848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.455 qpair failed and we were unable to recover it. 00:25:50.455 [2024-07-24 19:55:07.532007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.532946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.532972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 
00:25:50.456 [2024-07-24 19:55:07.533069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.533239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.533399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.533552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.533681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.533816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.533953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.533978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.534113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.534140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.534281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.534307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.534426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.534451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 
00:25:50.456 [2024-07-24 19:55:07.534548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.534573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.534678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.534703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.534829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.534855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.534984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.535119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.535247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.535373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.535530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.535654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.535778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 
00:25:50.456 [2024-07-24 19:55:07.535911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.535937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.536031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.536056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.536194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.536219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.536335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.456 [2024-07-24 19:55:07.536360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.456 qpair failed and we were unable to recover it. 00:25:50.456 [2024-07-24 19:55:07.536470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.536495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.536600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.536625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.536737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.536762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.536889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.536914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.537044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.537069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.537171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.537196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 
00:25:50.457 [2024-07-24 19:55:07.537314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.537340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 [2024-07-24 19:55:07.537465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.537490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 [2024-07-24 19:55:07.537593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.537619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:50.457 [2024-07-24 19:55:07.537759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.537785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:50.457 [2024-07-24 19:55:07.537912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.537952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.457 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:50.457 [2024-07-24 19:55:07.538070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.538097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 [2024-07-24 19:55:07.538231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.538273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
00:25:50.457 [2024-07-24 19:55:07.538384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.457 [2024-07-24 19:55:07.538411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.457 qpair failed and we were unable to recover it.
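The add_ns RPC above attaches the Malloc0 bdev to cnode1 as a namespace; Malloc0 itself was created earlier in the run. A self-contained sketch of both steps, with hypothetical size and block-size values since the test's actual parameters are not shown here:

  # hypothetical 64 MiB malloc bdev with 512-byte blocks, then expose it as a namespace
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0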
00:25:50.457 [2024-07-24 19:55:07.538549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.538575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.538680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.538705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.538806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.538832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.538959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.538985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.539102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.539235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.539383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.539525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.539652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.539796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 
00:25:50.457 [2024-07-24 19:55:07.539926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.539953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.540138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.540311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.540438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.540574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.540733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.540888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.540991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.541018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.541158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.541184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.541293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.541318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 
00:25:50.457 [2024-07-24 19:55:07.541434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.541462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.457 [2024-07-24 19:55:07.541570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.457 [2024-07-24 19:55:07.541596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.457 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.541703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.541729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.541835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.541861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.541973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.541998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.542113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.542139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.542277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.542303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.542415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.542442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.542548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.542573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.542681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.542706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 
00:25:50.458 [2024-07-24 19:55:07.542840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.542865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.542995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.543877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.543980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.544145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 
00:25:50.458 [2024-07-24 19:55:07.544306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.544441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.544570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.544698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.544825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.544953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.544978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.545071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.545097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.545200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.545227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.545351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.545378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 00:25:50.458 [2024-07-24 19:55:07.545520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.458 [2024-07-24 19:55:07.545545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420 00:25:50.458 qpair failed and we were unable to recover it. 
00:25:50.458 [2024-07-24 19:55:07.545673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:50.458 [2024-07-24 19:55:07.545698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 [2024-07-24 19:55:07.545817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:50.458 [2024-07-24 19:55:07.545842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:50.458 [2024-07-24 19:55:07.545980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 [2024-07-24 19:55:07.546006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:50.458 [2024-07-24 19:55:07.546123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 [2024-07-24 19:55:07.546151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 [2024-07-24 19:55:07.546265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 [2024-07-24 19:55:07.546291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 [2024-07-24 19:55:07.546399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 [2024-07-24 19:55:07.546424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 [2024-07-24 19:55:07.546554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.458 [2024-07-24 19:55:07.546579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.458 qpair failed and we were unable to recover it.
00:25:50.458 [2024-07-24 19:55:07.546709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.546734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
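Adding the listener is the step that finally makes 10.0.0.2:4420 accept connections, which is what the retry loop around it has been waiting for. Standalone, with the same transport, address, and port as the trace above:

  # open a TCP listener for cnode1 on the address the initiator keeps probing
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420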
00:25:50.459 [2024-07-24 19:55:07.546835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.546864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.546997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.547949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.547974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.548083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.548108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.548271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.548310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.548429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.548455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.548588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.548614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.548739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.548765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce7c000b90 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.548888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.548921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.549025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.549052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fce84000b90 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.549157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.549184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.549354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.549380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.549484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.459 [2024-07-24 19:55:07.549510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5b5250 with addr=10.0.0.2, port=4420
00:25:50.459 qpair failed and we were unable to recover it.
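The runs above all end in "connect() failed, errno = 111", which on Linux is ECONNREFUSED: the host keeps dialing 10.0.0.2:4420 before any listener exists there (the listen notice only appears below). A minimal standalone sketch of how that errno surfaces -- illustrative only, not SPDK's posix_sock_create() -- looks like this:

/* Illustrative only: a bare connect() to a TCP port with no listener
 * fails with errno 111 (ECONNREFUSED) on Linux, which is exactly what
 * the records above show while nothing is listening on 10.0.0.2:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                /* NVMe/TCP default service ID */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port, errno is ECONNREFUSED (111). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}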
00:25:50.459 [2024-07-24 19:55:07.549792] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:50.459 [2024-07-24 19:55:07.552129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.459 [2024-07-24 19:55:07.552271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.459 [2024-07-24 19:55:07.552299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.459 [2024-07-24 19:55:07.552315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.459 [2024-07-24 19:55:07.552328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.459 [2024-07-24 19:55:07.552363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:50.459 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:50.459 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@562 -- # xtrace_disable
00:25:50.459 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:50.459 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:25:50.459 19:55:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1285230
00:25:50.459 [2024-07-24 19:55:07.562096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.459 [2024-07-24 19:55:07.562204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.459 [2024-07-24 19:55:07.562231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.459 [2024-07-24 19:55:07.562253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.459 [2024-07-24 19:55:07.562268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.459 [2024-07-24 19:55:07.562304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.459 qpair failed and we were unable to recover it.
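In the blocks above, the CONNECT status pair "sct 1, sc 130" decodes, per the NVMe-oF specification, to status code type 0x1 (Command Specific) with status code 0x82, "Connect Invalid Parameters" -- consistent with the target's "Unknown controller ID 0x1" complaint when the I/O qpair's CONNECT names a controller the target does not know. A hedged sketch of that decoding (spec values hard-coded for illustration, not taken from SPDK's headers):

/* Hedged sketch: decode "sct 1, sc 130" using NVMe-oF spec values.
 * SCT 0x1 is the Command Specific status type; for a Fabrics Connect
 * command, SC 0x82 (130 decimal) is "Connect Invalid Parameters". */
#include <stdio.h>

static const char *connect_sc_str(unsigned int sc)
{
    switch (sc) {
    case 0x80: return "Connect Incompatible Format";
    case 0x81: return "Connect Controller Busy";
    case 0x82: return "Connect Invalid Parameters";
    case 0x83: return "Connect Restart Discovery";
    case 0x84: return "Connect Invalid Host";
    default:   return "unknown";
    }
}

int main(void)
{
    unsigned int sct = 1, sc = 130;   /* values from the log records above */

    if (sct == 0x1) /* Command Specific status type */
        printf("sct %u, sc %u -> %s\n", sct, sc, connect_sc_str(sc));
    return 0;
}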
00:25:50.459 [2024-07-24 19:55:07.572027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.459 [2024-07-24 19:55:07.572154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.459 [2024-07-24 19:55:07.572180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.459 [2024-07-24 19:55:07.572194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.459 [2024-07-24 19:55:07.572207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.459 [2024-07-24 19:55:07.572236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.581970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.459 [2024-07-24 19:55:07.582090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.459 [2024-07-24 19:55:07.582115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.459 [2024-07-24 19:55:07.582130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.459 [2024-07-24 19:55:07.582142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.459 [2024-07-24 19:55:07.582171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.591993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.459 [2024-07-24 19:55:07.592102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.459 [2024-07-24 19:55:07.592127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.459 [2024-07-24 19:55:07.592142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.459 [2024-07-24 19:55:07.592157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.459 [2024-07-24 19:55:07.592186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.459 qpair failed and we were unable to recover it.
00:25:50.459 [2024-07-24 19:55:07.602059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.602175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.602200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.602214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.602227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.602263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.612050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.612151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.612182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.612197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.612210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.612239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.622095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.622202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.622228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.622249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.622263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.622292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.632102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.632216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.632248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.632266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.632279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.632308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.642188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.642298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.642325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.642339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.642352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.642380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.652191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.652321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.652347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.652362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.652375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.652409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.662192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.662309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.662334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.662349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.662362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.662391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.672216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.672340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.672365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.672380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.672393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.672422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.682281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.682383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.682408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.682423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.682436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.682464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.692362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.692466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.692492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.692506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.692519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.692550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.702298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.702415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.702446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.702463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.460 [2024-07-24 19:55:07.702476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.460 [2024-07-24 19:55:07.702505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.460 qpair failed and we were unable to recover it.
00:25:50.460 [2024-07-24 19:55:07.712341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.460 [2024-07-24 19:55:07.712446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.460 [2024-07-24 19:55:07.712472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.460 [2024-07-24 19:55:07.712486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.712499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.712527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.722374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.722482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.722506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.722521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.722534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.722563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.732361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.732463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.732488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.732503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.732516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.732544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.742398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.742507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.742532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.742547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.742565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.742594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.752451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.752563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.752588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.752603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.752616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.752645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.762472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.762579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.762604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.762619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.762632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.762659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.772480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.772636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.772661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.772677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.772690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.772718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.782613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.782724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.782751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.782765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.782778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.782808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.792549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.792691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.792716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.792731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.792744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.792773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.461 [2024-07-24 19:55:07.802569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.461 [2024-07-24 19:55:07.802675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.461 [2024-07-24 19:55:07.802701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.461 [2024-07-24 19:55:07.802715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.461 [2024-07-24 19:55:07.802728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.461 [2024-07-24 19:55:07.802756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.461 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.812633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.812779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.812805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.812819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.812832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.812860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.822615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.822731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.822757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.822771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.822784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.822812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.832652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.832749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.832775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.832789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.832808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.832839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.842722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.842825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.842850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.842865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.842877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.842906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.852714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.852820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.852845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.852860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.852873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.852901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.862783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.862896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.862921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.862936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.862949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.862977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.872799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.872901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.872927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.872941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.872954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.720 [2024-07-24 19:55:07.872983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.720 qpair failed and we were unable to recover it.
00:25:50.720 [2024-07-24 19:55:07.882804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.720 [2024-07-24 19:55:07.882912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.720 [2024-07-24 19:55:07.882937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.720 [2024-07-24 19:55:07.882952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.720 [2024-07-24 19:55:07.882964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.882992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.892828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.892932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.892957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.892971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.892984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.893012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.902886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.903042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.903070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.903085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.903098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.903127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.912857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.912961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.912987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.913002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.913015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.913043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.922920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.923036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.923061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.923075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.923096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.923125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.932937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.933043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.933069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.933084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.933100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.933130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.942969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.943076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.943101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.943116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.943129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.943158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.952984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.953104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.953130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.953145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.953161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.953192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.963012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.963129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.963155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.963169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.963182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.963210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.973034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.973161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.973187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.973201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.973214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.973250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.983083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.983203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.983229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.983252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.983271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.983300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:07.993096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:07.993197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:07.993222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:07.993236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:07.993262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:07.993292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:08.003144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:08.003267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:08.003292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:08.003306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:08.003318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.721 [2024-07-24 19:55:08.003347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.721 qpair failed and we were unable to recover it.
00:25:50.721 [2024-07-24 19:55:08.013191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.721 [2024-07-24 19:55:08.013326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.721 [2024-07-24 19:55:08.013351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.721 [2024-07-24 19:55:08.013372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.721 [2024-07-24 19:55:08.013386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.722 [2024-07-24 19:55:08.013415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.722 qpair failed and we were unable to recover it.
00:25:50.722 [2024-07-24 19:55:08.023306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.722 [2024-07-24 19:55:08.023437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.722 [2024-07-24 19:55:08.023462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.722 [2024-07-24 19:55:08.023477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.722 [2024-07-24 19:55:08.023489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.722 [2024-07-24 19:55:08.023523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.722 qpair failed and we were unable to recover it.
00:25:50.722 [2024-07-24 19:55:08.033262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.722 [2024-07-24 19:55:08.033381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.722 [2024-07-24 19:55:08.033407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.722 [2024-07-24 19:55:08.033421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.722 [2024-07-24 19:55:08.033434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.722 [2024-07-24 19:55:08.033463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.722 qpair failed and we were unable to recover it.
00:25:50.722 [2024-07-24 19:55:08.043239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:50.722 [2024-07-24 19:55:08.043347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:50.722 [2024-07-24 19:55:08.043372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:50.722 [2024-07-24 19:55:08.043387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:50.722 [2024-07-24 19:55:08.043400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:50.722 [2024-07-24 19:55:08.043428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:50.722 qpair failed and we were unable to recover it.
00:25:50.722 [2024-07-24 19:55:08.053305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.722 [2024-07-24 19:55:08.053418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.722 [2024-07-24 19:55:08.053443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.722 [2024-07-24 19:55:08.053457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.722 [2024-07-24 19:55:08.053469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.722 [2024-07-24 19:55:08.053498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.722 qpair failed and we were unable to recover it. 00:25:50.722 [2024-07-24 19:55:08.063325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.722 [2024-07-24 19:55:08.063462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.722 [2024-07-24 19:55:08.063487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.722 [2024-07-24 19:55:08.063501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.722 [2024-07-24 19:55:08.063514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.722 [2024-07-24 19:55:08.063542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.722 qpair failed and we were unable to recover it. 00:25:50.722 [2024-07-24 19:55:08.073331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.722 [2024-07-24 19:55:08.073448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.722 [2024-07-24 19:55:08.073473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.722 [2024-07-24 19:55:08.073488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.722 [2024-07-24 19:55:08.073501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.722 [2024-07-24 19:55:08.073529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.722 qpair failed and we were unable to recover it. 
00:25:50.722 [2024-07-24 19:55:08.083378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.722 [2024-07-24 19:55:08.083500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.722 [2024-07-24 19:55:08.083525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.722 [2024-07-24 19:55:08.083539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.722 [2024-07-24 19:55:08.083552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.722 [2024-07-24 19:55:08.083581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.722 qpair failed and we were unable to recover it. 00:25:50.722 [2024-07-24 19:55:08.093397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.722 [2024-07-24 19:55:08.093499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.722 [2024-07-24 19:55:08.093524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.722 [2024-07-24 19:55:08.093538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.722 [2024-07-24 19:55:08.093552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.722 [2024-07-24 19:55:08.093582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.722 qpair failed and we were unable to recover it. 00:25:50.982 [2024-07-24 19:55:08.103467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.982 [2024-07-24 19:55:08.103580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.982 [2024-07-24 19:55:08.103604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.982 [2024-07-24 19:55:08.103624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.982 [2024-07-24 19:55:08.103637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.982 [2024-07-24 19:55:08.103665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.982 qpair failed and we were unable to recover it. 
00:25:50.982 [2024-07-24 19:55:08.113471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.982 [2024-07-24 19:55:08.113586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.982 [2024-07-24 19:55:08.113613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.982 [2024-07-24 19:55:08.113627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.982 [2024-07-24 19:55:08.113640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.982 [2024-07-24 19:55:08.113669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.982 qpair failed and we were unable to recover it. 00:25:50.982 [2024-07-24 19:55:08.123484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.982 [2024-07-24 19:55:08.123591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.982 [2024-07-24 19:55:08.123617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.982 [2024-07-24 19:55:08.123633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.982 [2024-07-24 19:55:08.123646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.982 [2024-07-24 19:55:08.123674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.982 qpair failed and we were unable to recover it. 00:25:50.982 [2024-07-24 19:55:08.133501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.982 [2024-07-24 19:55:08.133610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.982 [2024-07-24 19:55:08.133636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.982 [2024-07-24 19:55:08.133651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.982 [2024-07-24 19:55:08.133664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.982 [2024-07-24 19:55:08.133692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.982 qpair failed and we were unable to recover it. 
00:25:50.982 [2024-07-24 19:55:08.143544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.982 [2024-07-24 19:55:08.143653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.982 [2024-07-24 19:55:08.143679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.982 [2024-07-24 19:55:08.143693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.982 [2024-07-24 19:55:08.143707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.982 [2024-07-24 19:55:08.143735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.982 qpair failed and we were unable to recover it. 00:25:50.982 [2024-07-24 19:55:08.153574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.153676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.153701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.153716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.153729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.153757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.163615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.163732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.163758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.163773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.163789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.163819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 
00:25:50.983 [2024-07-24 19:55:08.173664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.173794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.173819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.173833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.173846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.173874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.183671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.183776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.183801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.183816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.183829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.183857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.193700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.193810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.193836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.193857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.193872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.193901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 
00:25:50.983 [2024-07-24 19:55:08.203725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.203830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.203855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.203871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.203885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.203914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.213726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.213826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.213851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.213865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.213879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.213907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.223816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.223923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.223949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.223963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.223976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.224004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 
00:25:50.983 [2024-07-24 19:55:08.233837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.233945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.233971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.233986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.233999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.234027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.243847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.243947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.243973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.243987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.244000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.244029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.253881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.254032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.254058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.254072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.254089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.254117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 
00:25:50.983 [2024-07-24 19:55:08.263905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.264032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.264057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.264072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.264085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.264112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.273923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.274031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.983 [2024-07-24 19:55:08.274056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.983 [2024-07-24 19:55:08.274071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.983 [2024-07-24 19:55:08.274084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.983 [2024-07-24 19:55:08.274112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.983 qpair failed and we were unable to recover it. 00:25:50.983 [2024-07-24 19:55:08.283966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.983 [2024-07-24 19:55:08.284072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.284097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.284118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.284132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.284161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 
00:25:50.984 [2024-07-24 19:55:08.293993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.294096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.294125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.294140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.294153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.294183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 00:25:50.984 [2024-07-24 19:55:08.304031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.304155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.304180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.304195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.304208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.304250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 00:25:50.984 [2024-07-24 19:55:08.314096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.314209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.314234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.314255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.314269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.314298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 
00:25:50.984 [2024-07-24 19:55:08.324093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.324221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.324252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.324278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.324292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.324321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 00:25:50.984 [2024-07-24 19:55:08.334085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.334189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.334214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.334228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.334248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.334279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 00:25:50.984 [2024-07-24 19:55:08.344279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.344404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.344429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.344444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.344457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.344485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 
00:25:50.984 [2024-07-24 19:55:08.354221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:50.984 [2024-07-24 19:55:08.354341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:50.984 [2024-07-24 19:55:08.354368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:50.984 [2024-07-24 19:55:08.354383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:50.984 [2024-07-24 19:55:08.354398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:50.984 [2024-07-24 19:55:08.354429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:50.984 qpair failed and we were unable to recover it. 00:25:51.243 [2024-07-24 19:55:08.364237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.243 [2024-07-24 19:55:08.364359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.243 [2024-07-24 19:55:08.364385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.243 [2024-07-24 19:55:08.364399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.243 [2024-07-24 19:55:08.364411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.243 [2024-07-24 19:55:08.364440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.243 qpair failed and we were unable to recover it. 00:25:51.243 [2024-07-24 19:55:08.374257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.243 [2024-07-24 19:55:08.374375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.243 [2024-07-24 19:55:08.374406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.243 [2024-07-24 19:55:08.374421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.243 [2024-07-24 19:55:08.374434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.243 [2024-07-24 19:55:08.374462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.243 qpair failed and we were unable to recover it. 
00:25:51.243 [2024-07-24 19:55:08.384269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.243 [2024-07-24 19:55:08.384381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.243 [2024-07-24 19:55:08.384407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.243 [2024-07-24 19:55:08.384421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.243 [2024-07-24 19:55:08.384435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.243 [2024-07-24 19:55:08.384464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.243 qpair failed and we were unable to recover it. 00:25:51.243 [2024-07-24 19:55:08.394281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.243 [2024-07-24 19:55:08.394432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.243 [2024-07-24 19:55:08.394458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.243 [2024-07-24 19:55:08.394472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.243 [2024-07-24 19:55:08.394485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.243 [2024-07-24 19:55:08.394514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.243 qpair failed and we were unable to recover it. 00:25:51.243 [2024-07-24 19:55:08.404317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.404419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.404444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.404458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.404471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.404500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 
00:25:51.244 [2024-07-24 19:55:08.414370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.414489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.414514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.414528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.414541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.414576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.424437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.424592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.424617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.424631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.424644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.424672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.434400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.434516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.434541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.434556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.434569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.434598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 
00:25:51.244 [2024-07-24 19:55:08.444419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.444521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.444547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.444562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.444575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.444603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.454485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.454597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.454622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.454636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.454649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.454677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.464477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.464587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.464617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.464633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.464645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.464674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 
00:25:51.244 [2024-07-24 19:55:08.474537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.474637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.474663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.474677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.474691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.474719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.484527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.484633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.484658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.484672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.484685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.484713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.494609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.494711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.494736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.494751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.494764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.494792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 
00:25:51.244 [2024-07-24 19:55:08.504586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.504701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.504726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.504741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.504754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.504789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.514631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.514741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.514766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.514781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.514793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.514822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 00:25:51.244 [2024-07-24 19:55:08.524623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.524730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.524756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.244 [2024-07-24 19:55:08.524771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.244 [2024-07-24 19:55:08.524784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.244 [2024-07-24 19:55:08.524812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.244 qpair failed and we were unable to recover it. 
00:25:51.244 [2024-07-24 19:55:08.534686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.244 [2024-07-24 19:55:08.534785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.244 [2024-07-24 19:55:08.534811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.245 [2024-07-24 19:55:08.534825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.245 [2024-07-24 19:55:08.534838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.245 [2024-07-24 19:55:08.534867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.245 qpair failed and we were unable to recover it. 00:25:51.245 [2024-07-24 19:55:08.544730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.245 [2024-07-24 19:55:08.544844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.245 [2024-07-24 19:55:08.544869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.245 [2024-07-24 19:55:08.544883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.245 [2024-07-24 19:55:08.544896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.245 [2024-07-24 19:55:08.544924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.245 qpair failed and we were unable to recover it. 00:25:51.245 [2024-07-24 19:55:08.554732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.245 [2024-07-24 19:55:08.554838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.245 [2024-07-24 19:55:08.554869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.245 [2024-07-24 19:55:08.554885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.245 [2024-07-24 19:55:08.554897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.245 [2024-07-24 19:55:08.554926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.245 qpair failed and we were unable to recover it. 
00:25:51.245 [2024-07-24 19:55:08.564748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.245 [2024-07-24 19:55:08.564848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.245 [2024-07-24 19:55:08.564872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.245 [2024-07-24 19:55:08.564886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.245 [2024-07-24 19:55:08.564899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.245 [2024-07-24 19:55:08.564927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.245 qpair failed and we were unable to recover it. 00:25:51.245 [2024-07-24 19:55:08.574767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.245 [2024-07-24 19:55:08.574866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.245 [2024-07-24 19:55:08.574892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.245 [2024-07-24 19:55:08.574906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.245 [2024-07-24 19:55:08.574919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.245 [2024-07-24 19:55:08.574947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.245 qpair failed and we were unable to recover it. 00:25:51.245 [2024-07-24 19:55:08.584825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:51.245 [2024-07-24 19:55:08.584945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:51.245 [2024-07-24 19:55:08.584970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:51.245 [2024-07-24 19:55:08.584985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:51.245 [2024-07-24 19:55:08.584998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:51.245 [2024-07-24 19:55:08.585025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:51.245 qpair failed and we were unable to recover it. 
00:25:52.027 [2024-07-24 19:55:09.226630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.226734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.226759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.226779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.226793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.226822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 00:25:52.027 [2024-07-24 19:55:09.236616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.236730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.236756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.236771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.236784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.236813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 00:25:52.027 [2024-07-24 19:55:09.246689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.246797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.246823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.246837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.246850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.246878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 
00:25:52.027 [2024-07-24 19:55:09.256737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.256854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.256880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.256894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.256907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.256935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 00:25:52.027 [2024-07-24 19:55:09.266713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.266817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.266842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.266857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.266870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.266898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 00:25:52.027 [2024-07-24 19:55:09.276743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.276850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.276877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.276891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.276904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.276933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 
00:25:52.027 [2024-07-24 19:55:09.286798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.286906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.286932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.286946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.286961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.286990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 00:25:52.027 [2024-07-24 19:55:09.296808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.296910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.296936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.296951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.296964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.296992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 00:25:52.027 [2024-07-24 19:55:09.306873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.306986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.027 [2024-07-24 19:55:09.307011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.027 [2024-07-24 19:55:09.307026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.027 [2024-07-24 19:55:09.307039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.027 [2024-07-24 19:55:09.307067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.027 qpair failed and we were unable to recover it. 
00:25:52.027 [2024-07-24 19:55:09.316850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.027 [2024-07-24 19:55:09.316966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.316997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.317013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.317026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.317055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 00:25:52.028 [2024-07-24 19:55:09.326863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.326964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.326989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.327004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.327017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.327045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 00:25:52.028 [2024-07-24 19:55:09.336919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.337031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.337058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.337072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.337089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.337119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 
00:25:52.028 [2024-07-24 19:55:09.346955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.347064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.347091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.347106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.347119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.347148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 00:25:52.028 [2024-07-24 19:55:09.357004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.357117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.357144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.357158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.357172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.357200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 00:25:52.028 [2024-07-24 19:55:09.366992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.367092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.367118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.367132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.367145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.367173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 
00:25:52.028 [2024-07-24 19:55:09.377042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.377182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.377207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.377222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.377235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.377271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 00:25:52.028 [2024-07-24 19:55:09.387086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.387196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.387221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.387236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.387258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.387288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 00:25:52.028 [2024-07-24 19:55:09.397085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.028 [2024-07-24 19:55:09.397186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.028 [2024-07-24 19:55:09.397212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.028 [2024-07-24 19:55:09.397227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.028 [2024-07-24 19:55:09.397240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.028 [2024-07-24 19:55:09.397278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.028 qpair failed and we were unable to recover it. 
00:25:52.287 [2024-07-24 19:55:09.407145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.287 [2024-07-24 19:55:09.407255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.287 [2024-07-24 19:55:09.407285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.287 [2024-07-24 19:55:09.407301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.407314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.407343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.417122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.417226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.417259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.417275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.417288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.417316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.427218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.427355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.427380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.427394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.427407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.427436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 
00:25:52.288 [2024-07-24 19:55:09.437204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.437321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.437347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.437361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.437374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.437402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.447210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.447360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.447386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.447401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.447414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.447451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.457265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.457374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.457401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.457415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.457432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.457463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 
00:25:52.288 [2024-07-24 19:55:09.467331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.467440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.467466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.467481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.467494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.467522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.477321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.477424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.477449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.477464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.477476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.477505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.487337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.487453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.487479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.487493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.487506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.487534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 
00:25:52.288 [2024-07-24 19:55:09.497360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.497511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.497542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.497558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.497571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.497599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.507446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.507576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.507602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.507616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.507629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.507657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.517415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.517524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.517550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.517565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.517578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.517607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 
00:25:52.288 [2024-07-24 19:55:09.527478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.527598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.527623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.527637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.527650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.527678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.288 qpair failed and we were unable to recover it. 00:25:52.288 [2024-07-24 19:55:09.537474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.288 [2024-07-24 19:55:09.537580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.288 [2024-07-24 19:55:09.537605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.288 [2024-07-24 19:55:09.537619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.288 [2024-07-24 19:55:09.537632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.288 [2024-07-24 19:55:09.537669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.547510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.547619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.547644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.547658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.547671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.547699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 
00:25:52.289 [2024-07-24 19:55:09.557534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.557647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.557672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.557687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.557700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.557729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.567580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.567680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.567705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.567720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.567733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.567762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.577564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.577667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.577692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.577706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.577719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.577748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 
00:25:52.289 [2024-07-24 19:55:09.587622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.587730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.587759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.587775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.587788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.587816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.597645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.597749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.597774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.597789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.597802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.597831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.607696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.607826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.607851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.607865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.607878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.607906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 
00:25:52.289 [2024-07-24 19:55:09.617675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.617782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.617807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.617821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.617834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.617862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.627788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.627902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.627927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.627941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.627953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.627988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.637795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.637910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.637935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.637949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.637962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.637990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 
00:25:52.289 [2024-07-24 19:55:09.647834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.647955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.647981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.647995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.648008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.648036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.289 [2024-07-24 19:55:09.657835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.289 [2024-07-24 19:55:09.657942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.289 [2024-07-24 19:55:09.657968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.289 [2024-07-24 19:55:09.657982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.289 [2024-07-24 19:55:09.657995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.289 [2024-07-24 19:55:09.658023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.289 qpair failed and we were unable to recover it. 00:25:52.548 [2024-07-24 19:55:09.667885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.668030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.668056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.668070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.668083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.668116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 
00:25:52.548 [2024-07-24 19:55:09.677875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.678005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.678036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.678051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.678064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.678092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 00:25:52.548 [2024-07-24 19:55:09.687916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.688044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.688069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.688084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.688097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.688125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 00:25:52.548 [2024-07-24 19:55:09.697932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.698035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.698061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.698075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.698089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.698117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 
00:25:52.548 [2024-07-24 19:55:09.707969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.708078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.708104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.708118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.708131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.708159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 00:25:52.548 [2024-07-24 19:55:09.717978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.718082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.718107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.718122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.718140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.718172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 00:25:52.548 [2024-07-24 19:55:09.728001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.728111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.728137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.728153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.728166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.728194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 
00:25:52.548 [2024-07-24 19:55:09.738032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.738133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.548 [2024-07-24 19:55:09.738158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.548 [2024-07-24 19:55:09.738172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.548 [2024-07-24 19:55:09.738185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.548 [2024-07-24 19:55:09.738214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.548 qpair failed and we were unable to recover it. 00:25:52.548 [2024-07-24 19:55:09.748091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.548 [2024-07-24 19:55:09.748200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.748224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.748239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.748260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.748289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.758110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.758238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.758273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.758288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.758301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.758330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 
00:25:52.549 [2024-07-24 19:55:09.768126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.768235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.768267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.768281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.768295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.768323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.778187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.778310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.778336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.778351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.778363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.778391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.788223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.788342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.788367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.788381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.788394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.788422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 
00:25:52.549 [2024-07-24 19:55:09.798230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.798347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.798372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.798386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.798399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.798428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.808249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.808359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.808384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.808398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.808417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.808446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.818286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.818391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.818417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.818431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.818444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.818472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 
00:25:52.549 [2024-07-24 19:55:09.828336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.828468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.828493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.828507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.828520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.828548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.838385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.838526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.838551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.838565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.838579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.838607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.848341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.848441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.848467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.848481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.848494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.848522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 
00:25:52.549 [2024-07-24 19:55:09.858397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.858521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.858546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.858560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.858572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.858600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.868409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.868515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.868540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.868555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.549 [2024-07-24 19:55:09.868568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.549 [2024-07-24 19:55:09.868596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.549 qpair failed and we were unable to recover it. 00:25:52.549 [2024-07-24 19:55:09.878473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.549 [2024-07-24 19:55:09.878588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.549 [2024-07-24 19:55:09.878614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.549 [2024-07-24 19:55:09.878628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.550 [2024-07-24 19:55:09.878641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.550 [2024-07-24 19:55:09.878671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.550 qpair failed and we were unable to recover it. 
00:25:52.550 [2024-07-24 19:55:09.888488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.550 [2024-07-24 19:55:09.888588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.550 [2024-07-24 19:55:09.888613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.550 [2024-07-24 19:55:09.888627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.550 [2024-07-24 19:55:09.888640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.550 [2024-07-24 19:55:09.888669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.550 qpair failed and we were unable to recover it. 00:25:52.550 [2024-07-24 19:55:09.898528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.550 [2024-07-24 19:55:09.898632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.550 [2024-07-24 19:55:09.898658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.550 [2024-07-24 19:55:09.898673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.550 [2024-07-24 19:55:09.898691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.550 [2024-07-24 19:55:09.898720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.550 qpair failed and we were unable to recover it. 00:25:52.550 [2024-07-24 19:55:09.908551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.550 [2024-07-24 19:55:09.908656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.550 [2024-07-24 19:55:09.908680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.550 [2024-07-24 19:55:09.908695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.550 [2024-07-24 19:55:09.908708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.550 [2024-07-24 19:55:09.908737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.550 qpair failed and we were unable to recover it. 
00:25:52.550 [2024-07-24 19:55:09.918573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.550 [2024-07-24 19:55:09.918678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.550 [2024-07-24 19:55:09.918704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.550 [2024-07-24 19:55:09.918718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.550 [2024-07-24 19:55:09.918731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.550 [2024-07-24 19:55:09.918759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.550 qpair failed and we were unable to recover it. 00:25:52.809 [2024-07-24 19:55:09.928573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.928726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.928751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.928766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.928778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.928806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 00:25:52.809 [2024-07-24 19:55:09.938603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.938707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.938732] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.938746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.938761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.938790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 
00:25:52.809 [2024-07-24 19:55:09.948655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.948774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.948799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.948814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.948827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.948855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 00:25:52.809 [2024-07-24 19:55:09.958679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.958786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.958812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.958826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.958839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.958868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 00:25:52.809 [2024-07-24 19:55:09.968703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.968830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.968856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.968871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.968885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.968914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 
00:25:52.809 [2024-07-24 19:55:09.978739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.978839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.978865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.978879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.978892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.978921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 00:25:52.809 [2024-07-24 19:55:09.988787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.988901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.988926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.988946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.988961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.988989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 00:25:52.809 [2024-07-24 19:55:09.998760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:09.998865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:09.998890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:09.998905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:09.998919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.809 [2024-07-24 19:55:09.998947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.809 qpair failed and we were unable to recover it. 
00:25:52.809 [2024-07-24 19:55:10.008872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.809 [2024-07-24 19:55:10.009007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.809 [2024-07-24 19:55:10.009044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.809 [2024-07-24 19:55:10.009070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.809 [2024-07-24 19:55:10.009091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.009131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.018868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.018975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.019010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.019035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.019056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.019096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.028891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.029052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.029085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.029108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.029129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.029172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 
00:25:52.810 [2024-07-24 19:55:10.038950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.039075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.039110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.039134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.039155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.039197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.048976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.049128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.049162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.049188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.049211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.049260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.058963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.059093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.059127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.059149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.059169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.059207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 
00:25:52.810 [2024-07-24 19:55:10.069063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.069180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.069208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.069223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.069237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.069274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.079031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.079185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.079212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.079237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.079258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.079292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.089025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.089135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.089160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.089174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.089187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.089215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 
00:25:52.810 [2024-07-24 19:55:10.099052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.099154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.099180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.099195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.099208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.099237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.109181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.109295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.109320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.109334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.109346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.109376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.119134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.119285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.119311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.119325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.119339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.119368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 
00:25:52.810 [2024-07-24 19:55:10.129149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.129260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.129285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.810 [2024-07-24 19:55:10.129300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.810 [2024-07-24 19:55:10.129313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.810 [2024-07-24 19:55:10.129342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.810 qpair failed and we were unable to recover it. 00:25:52.810 [2024-07-24 19:55:10.139160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.810 [2024-07-24 19:55:10.139274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.810 [2024-07-24 19:55:10.139300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.811 [2024-07-24 19:55:10.139314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.811 [2024-07-24 19:55:10.139327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.811 [2024-07-24 19:55:10.139356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.811 qpair failed and we were unable to recover it. 00:25:52.811 [2024-07-24 19:55:10.149273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.811 [2024-07-24 19:55:10.149378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.811 [2024-07-24 19:55:10.149403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.811 [2024-07-24 19:55:10.149418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.811 [2024-07-24 19:55:10.149431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.811 [2024-07-24 19:55:10.149459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.811 qpair failed and we were unable to recover it. 
00:25:52.811 [2024-07-24 19:55:10.159256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.811 [2024-07-24 19:55:10.159364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.811 [2024-07-24 19:55:10.159389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.811 [2024-07-24 19:55:10.159404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.811 [2024-07-24 19:55:10.159417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.811 [2024-07-24 19:55:10.159446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.811 qpair failed and we were unable to recover it. 00:25:52.811 [2024-07-24 19:55:10.169286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.811 [2024-07-24 19:55:10.169384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.811 [2024-07-24 19:55:10.169409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.811 [2024-07-24 19:55:10.169430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.811 [2024-07-24 19:55:10.169444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.811 [2024-07-24 19:55:10.169472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.811 qpair failed and we were unable to recover it. 00:25:52.811 [2024-07-24 19:55:10.179299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:52.811 [2024-07-24 19:55:10.179445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:52.811 [2024-07-24 19:55:10.179471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:52.811 [2024-07-24 19:55:10.179485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:52.811 [2024-07-24 19:55:10.179498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:52.811 [2024-07-24 19:55:10.179527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:52.811 qpair failed and we were unable to recover it. 
00:25:53.070 [2024-07-24 19:55:10.189316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.189424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.189450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.189464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.189477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.189506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 00:25:53.070 [2024-07-24 19:55:10.199338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.199441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.199466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.199480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.199493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.199522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 00:25:53.070 [2024-07-24 19:55:10.209399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.209509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.209534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.209548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.209561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.209590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 
00:25:53.070 [2024-07-24 19:55:10.219439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.219565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.219592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.219607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.219620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.219649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 00:25:53.070 [2024-07-24 19:55:10.229523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.229635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.229660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.229675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.229688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.229716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 00:25:53.070 [2024-07-24 19:55:10.239460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.239560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.239586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.239600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.239613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.239641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 
00:25:53.070 [2024-07-24 19:55:10.249477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.249583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.070 [2024-07-24 19:55:10.249609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.070 [2024-07-24 19:55:10.249623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.070 [2024-07-24 19:55:10.249635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.070 [2024-07-24 19:55:10.249663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.070 qpair failed and we were unable to recover it. 00:25:53.070 [2024-07-24 19:55:10.259550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.070 [2024-07-24 19:55:10.259677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.259709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.259729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.259744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.259774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 00:25:53.071 [2024-07-24 19:55:10.269550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.269667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.269692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.269706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.269719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.269747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 
00:25:53.071 [2024-07-24 19:55:10.279565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.279671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.279697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.279712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.279724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.279752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 00:25:53.071 [2024-07-24 19:55:10.289613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.289752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.289778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.289792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.289806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.289834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 00:25:53.071 [2024-07-24 19:55:10.299636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.299768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.299793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.299808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.299821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.299849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 
00:25:53.071 [2024-07-24 19:55:10.309663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.309795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.309821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.309838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.309851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.309881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 00:25:53.071 [2024-07-24 19:55:10.319720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.319844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.319870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.319885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.319899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.319927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 00:25:53.071 [2024-07-24 19:55:10.329725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.071 [2024-07-24 19:55:10.329826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.071 [2024-07-24 19:55:10.329852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.071 [2024-07-24 19:55:10.329867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.071 [2024-07-24 19:55:10.329879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.071 [2024-07-24 19:55:10.329907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.071 qpair failed and we were unable to recover it. 
00:25:53.071 [2024-07-24 19:55:10.339727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.071 [2024-07-24 19:55:10.339879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.071 [2024-07-24 19:55:10.339904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.071 [2024-07-24 19:55:10.339919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.071 [2024-07-24 19:55:10.339930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.071 [2024-07-24 19:55:10.339959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.071 qpair failed and we were unable to recover it.
00:25:53.071 [2024-07-24 19:55:10.349791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.071 [2024-07-24 19:55:10.349896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.071 [2024-07-24 19:55:10.349927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.071 [2024-07-24 19:55:10.349943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.071 [2024-07-24 19:55:10.349956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.071 [2024-07-24 19:55:10.349984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.071 qpair failed and we were unable to recover it.
00:25:53.071 [2024-07-24 19:55:10.359781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.071 [2024-07-24 19:55:10.359891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.071 [2024-07-24 19:55:10.359916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.071 [2024-07-24 19:55:10.359930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.071 [2024-07-24 19:55:10.359943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.071 [2024-07-24 19:55:10.359972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.071 qpair failed and we were unable to recover it.
00:25:53.071 [2024-07-24 19:55:10.369811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.071 [2024-07-24 19:55:10.369961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.071 [2024-07-24 19:55:10.369987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.071 [2024-07-24 19:55:10.370001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.071 [2024-07-24 19:55:10.370014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.071 [2024-07-24 19:55:10.370042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.071 qpair failed and we were unable to recover it.
00:25:53.071 [2024-07-24 19:55:10.379849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.071 [2024-07-24 19:55:10.379975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.071 [2024-07-24 19:55:10.380001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.380015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.380028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.380056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.072 [2024-07-24 19:55:10.389893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.072 [2024-07-24 19:55:10.390002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.072 [2024-07-24 19:55:10.390027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.390041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.390054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.390088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.072 [2024-07-24 19:55:10.399895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.072 [2024-07-24 19:55:10.399999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.072 [2024-07-24 19:55:10.400025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.400040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.400053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.400082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.072 [2024-07-24 19:55:10.409970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.072 [2024-07-24 19:55:10.410090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.072 [2024-07-24 19:55:10.410115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.410129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.410143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.410171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.072 [2024-07-24 19:55:10.420023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.072 [2024-07-24 19:55:10.420134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.072 [2024-07-24 19:55:10.420160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.420174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.420188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.420216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.072 [2024-07-24 19:55:10.430017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.072 [2024-07-24 19:55:10.430137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.072 [2024-07-24 19:55:10.430163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.430178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.430191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.430220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.072 [2024-07-24 19:55:10.440076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.072 [2024-07-24 19:55:10.440182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.072 [2024-07-24 19:55:10.440213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.072 [2024-07-24 19:55:10.440229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.072 [2024-07-24 19:55:10.440253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.072 [2024-07-24 19:55:10.440284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.072 qpair failed and we were unable to recover it.
00:25:53.331 [2024-07-24 19:55:10.450137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.331 [2024-07-24 19:55:10.450264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.331 [2024-07-24 19:55:10.450290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.331 [2024-07-24 19:55:10.450304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.331 [2024-07-24 19:55:10.450319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.450347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.460153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.460271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.460298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.460313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.460327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.460355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.470164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.470292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.470318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.470332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.470346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.470374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.480161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.480295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.480321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.480336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.480349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.480384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.490191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.490311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.490337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.490351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.490364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.490393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.500266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.500380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.500406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.500421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.500435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.500464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.510291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.510447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.510473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.510487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.510499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.510528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.520303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.520416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.520441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.520456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.520469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.520498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.530312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.530416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.530447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.530462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.530475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.530504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.540348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.540501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.540526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.540541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.540554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.540583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.550397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.550507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.550532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.550546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.550559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.550588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.560468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.560582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.332 [2024-07-24 19:55:10.560607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.332 [2024-07-24 19:55:10.560622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.332 [2024-07-24 19:55:10.560635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.332 [2024-07-24 19:55:10.560663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.332 qpair failed and we were unable to recover it.
00:25:53.332 [2024-07-24 19:55:10.570480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.332 [2024-07-24 19:55:10.570587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.570613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.570627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.570640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.570674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.580475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.580578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.580604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.580618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.580631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.580659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.590580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.590721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.590746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.590761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.590774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.590801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.600562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.600691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.600716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.600730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.600743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.600772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.610573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.610681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.610706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.610721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.610733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.610762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.620613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.620762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.620792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.620807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.620820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.620849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.630672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.630789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.630814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.630828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.630841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.630869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.640664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.640786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.640811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.640826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.640838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.640867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.650676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.650779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.650804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.650819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.650831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.650859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.660709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.660807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.660832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.660846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.660864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.660894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.670735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.670838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.670864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.670878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.670890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.670918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.680792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.680928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.680954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.680968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.680981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.333 [2024-07-24 19:55:10.681009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.333 qpair failed and we were unable to recover it.
00:25:53.333 [2024-07-24 19:55:10.690844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.333 [2024-07-24 19:55:10.690958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.333 [2024-07-24 19:55:10.690985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.333 [2024-07-24 19:55:10.691004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.333 [2024-07-24 19:55:10.691018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.334 [2024-07-24 19:55:10.691048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.334 qpair failed and we were unable to recover it.
00:25:53.334 [2024-07-24 19:55:10.700810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.334 [2024-07-24 19:55:10.700914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.334 [2024-07-24 19:55:10.700939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.334 [2024-07-24 19:55:10.700954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.334 [2024-07-24 19:55:10.700967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.334 [2024-07-24 19:55:10.700995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.334 qpair failed and we were unable to recover it.
00:25:53.592 [2024-07-24 19:55:10.710910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.592 [2024-07-24 19:55:10.711053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.592 [2024-07-24 19:55:10.711079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.592 [2024-07-24 19:55:10.711094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.592 [2024-07-24 19:55:10.711107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.592 [2024-07-24 19:55:10.711137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.592 qpair failed and we were unable to recover it.
00:25:53.592 [2024-07-24 19:55:10.720900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.592 [2024-07-24 19:55:10.721008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.592 [2024-07-24 19:55:10.721034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.592 [2024-07-24 19:55:10.721049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.592 [2024-07-24 19:55:10.721061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.592 [2024-07-24 19:55:10.721090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.592 qpair failed and we were unable to recover it.
00:25:53.592 [2024-07-24 19:55:10.730912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.592 [2024-07-24 19:55:10.731015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.731042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.731057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.731070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.731098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.740910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.741009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.741034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.741049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.741061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.741090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.750984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.751088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.751113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.751127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.751145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.751174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.760989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.761099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.761125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.761140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.761153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.761181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.771023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.771151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.771176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.771191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.771204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.771232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.781058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.781165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.781190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.781205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.781218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.781252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.791064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.791173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.791198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.791213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.791225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.791260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.801112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.801217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.801248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.801265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.801279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.801307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.811140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.811250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.811276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.811290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.811304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.811332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.821187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.821297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.821324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.821338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.821351] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.821381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.831236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.831391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.831417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.831431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.831444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.831472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.841201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.841313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.841339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.841354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.841372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.841402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.851302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.851462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.593 [2024-07-24 19:55:10.851487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.593 [2024-07-24 19:55:10.851501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.593 [2024-07-24 19:55:10.851514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.593 [2024-07-24 19:55:10.851542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.593 qpair failed and we were unable to recover it.
00:25:53.593 [2024-07-24 19:55:10.861322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.593 [2024-07-24 19:55:10.861469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.594 [2024-07-24 19:55:10.861494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.594 [2024-07-24 19:55:10.861508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.594 [2024-07-24 19:55:10.861521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.594 [2024-07-24 19:55:10.861550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.594 qpair failed and we were unable to recover it.
00:25:53.594 [2024-07-24 19:55:10.871306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.594 [2024-07-24 19:55:10.871446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.594 [2024-07-24 19:55:10.871471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.594 [2024-07-24 19:55:10.871485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.594 [2024-07-24 19:55:10.871500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.594 [2024-07-24 19:55:10.871529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.594 qpair failed and we were unable to recover it.
00:25:53.594 [2024-07-24 19:55:10.881345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.594 [2024-07-24 19:55:10.881473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.594 [2024-07-24 19:55:10.881500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.594 [2024-07-24 19:55:10.881515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.594 [2024-07-24 19:55:10.881529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.594 [2024-07-24 19:55:10.881558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.594 qpair failed and we were unable to recover it.
00:25:53.594 [2024-07-24 19:55:10.891386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.594 [2024-07-24 19:55:10.891514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.594 [2024-07-24 19:55:10.891541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.594 [2024-07-24 19:55:10.891556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.594 [2024-07-24 19:55:10.891569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.594 [2024-07-24 19:55:10.891598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.594 qpair failed and we were unable to recover it.
00:25:53.594 [2024-07-24 19:55:10.901389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.594 [2024-07-24 19:55:10.901521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.594 [2024-07-24 19:55:10.901548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.594 [2024-07-24 19:55:10.901562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.594 [2024-07-24 19:55:10.901575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.594 [2024-07-24 19:55:10.901603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.594 qpair failed and we were unable to recover it.
00:25:53.594 [2024-07-24 19:55:10.911408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.594 [2024-07-24 19:55:10.911515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.594 [2024-07-24 19:55:10.911540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.594 [2024-07-24 19:55:10.911554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.594 [2024-07-24 19:55:10.911567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.594 [2024-07-24 19:55:10.911595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.594 qpair failed and we were unable to recover it. 00:25:53.594 [2024-07-24 19:55:10.921486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.594 [2024-07-24 19:55:10.921635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.594 [2024-07-24 19:55:10.921660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.594 [2024-07-24 19:55:10.921674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.594 [2024-07-24 19:55:10.921687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.594 [2024-07-24 19:55:10.921715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.594 qpair failed and we were unable to recover it. 00:25:53.594 [2024-07-24 19:55:10.931495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.594 [2024-07-24 19:55:10.931598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.594 [2024-07-24 19:55:10.931624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.594 [2024-07-24 19:55:10.931644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.594 [2024-07-24 19:55:10.931657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.594 [2024-07-24 19:55:10.931686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.594 qpair failed and we were unable to recover it. 
00:25:53.594 [2024-07-24 19:55:10.941489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.594 [2024-07-24 19:55:10.941591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.594 [2024-07-24 19:55:10.941616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.594 [2024-07-24 19:55:10.941631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.594 [2024-07-24 19:55:10.941644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.594 [2024-07-24 19:55:10.941672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.594 qpair failed and we were unable to recover it. 00:25:53.594 [2024-07-24 19:55:10.951514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.594 [2024-07-24 19:55:10.951619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.594 [2024-07-24 19:55:10.951644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.594 [2024-07-24 19:55:10.951658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.594 [2024-07-24 19:55:10.951671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.594 [2024-07-24 19:55:10.951698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.594 qpair failed and we were unable to recover it. 00:25:53.594 [2024-07-24 19:55:10.961564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.594 [2024-07-24 19:55:10.961675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.594 [2024-07-24 19:55:10.961700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.594 [2024-07-24 19:55:10.961714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.594 [2024-07-24 19:55:10.961728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.594 [2024-07-24 19:55:10.961756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.594 qpair failed and we were unable to recover it. 
00:25:53.854 [2024-07-24 19:55:10.971599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.854 [2024-07-24 19:55:10.971713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.854 [2024-07-24 19:55:10.971738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.854 [2024-07-24 19:55:10.971752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.854 [2024-07-24 19:55:10.971765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.854 [2024-07-24 19:55:10.971796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.854 qpair failed and we were unable to recover it. 00:25:53.854 [2024-07-24 19:55:10.981605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.854 [2024-07-24 19:55:10.981707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.854 [2024-07-24 19:55:10.981733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.854 [2024-07-24 19:55:10.981747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.854 [2024-07-24 19:55:10.981760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.854 [2024-07-24 19:55:10.981789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.854 qpair failed and we were unable to recover it. 00:25:53.854 [2024-07-24 19:55:10.991641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.854 [2024-07-24 19:55:10.991748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.854 [2024-07-24 19:55:10.991773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.854 [2024-07-24 19:55:10.991788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.854 [2024-07-24 19:55:10.991800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.854 [2024-07-24 19:55:10.991828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.854 qpair failed and we were unable to recover it. 
00:25:53.854 [2024-07-24 19:55:11.001687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.854 [2024-07-24 19:55:11.001802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.854 [2024-07-24 19:55:11.001828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.854 [2024-07-24 19:55:11.001842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.854 [2024-07-24 19:55:11.001855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.854 [2024-07-24 19:55:11.001885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.854 qpair failed and we were unable to recover it. 00:25:53.854 [2024-07-24 19:55:11.011683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.854 [2024-07-24 19:55:11.011782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.854 [2024-07-24 19:55:11.011807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.854 [2024-07-24 19:55:11.011822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.854 [2024-07-24 19:55:11.011835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.854 [2024-07-24 19:55:11.011863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.854 qpair failed and we were unable to recover it. 00:25:53.854 [2024-07-24 19:55:11.021706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:53.854 [2024-07-24 19:55:11.021831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:53.854 [2024-07-24 19:55:11.021857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:53.854 [2024-07-24 19:55:11.021878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:53.854 [2024-07-24 19:55:11.021893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:53.854 [2024-07-24 19:55:11.021921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:53.854 qpair failed and we were unable to recover it. 
00:25:53.854 [2024-07-24 19:55:11.031738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.854 [2024-07-24 19:55:11.031847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.854 [2024-07-24 19:55:11.031872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.854 [2024-07-24 19:55:11.031887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.031900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.031928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.041806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.041925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.041951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.041966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.041979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.042007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.051829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.051935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.051960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.051975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.051988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.052015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.061846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.061967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.061992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.062006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.062019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.062046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.071885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.072001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.072030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.072045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.072058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.072087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.081878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.081987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.082013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.082027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.082040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.082068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.091943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.092046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.092071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.092085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.092098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.092127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.101942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.102043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.102068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.102083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.102096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.102124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.112014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.112126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.112150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.112169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.112182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.112210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.122035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.122161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.122187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.122201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.122214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.122249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.132034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.132177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.132202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.132216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.132229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.132276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.142070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.142180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.142204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.142218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.142231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.142266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.152115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.152238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.152271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.152286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.152299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.152327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.855 [2024-07-24 19:55:11.162130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.855 [2024-07-24 19:55:11.162249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.855 [2024-07-24 19:55:11.162274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.855 [2024-07-24 19:55:11.162289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.855 [2024-07-24 19:55:11.162302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.855 [2024-07-24 19:55:11.162332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.855 qpair failed and we were unable to recover it.
00:25:53.856 [2024-07-24 19:55:11.172189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.856 [2024-07-24 19:55:11.172304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.856 [2024-07-24 19:55:11.172329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.856 [2024-07-24 19:55:11.172344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.856 [2024-07-24 19:55:11.172356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.856 [2024-07-24 19:55:11.172385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.856 qpair failed and we were unable to recover it.
00:25:53.856 [2024-07-24 19:55:11.182181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.856 [2024-07-24 19:55:11.182294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.856 [2024-07-24 19:55:11.182319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.856 [2024-07-24 19:55:11.182333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.856 [2024-07-24 19:55:11.182346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.856 [2024-07-24 19:55:11.182375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.856 qpair failed and we were unable to recover it.
00:25:53.856 [2024-07-24 19:55:11.192295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.856 [2024-07-24 19:55:11.192443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.856 [2024-07-24 19:55:11.192468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.856 [2024-07-24 19:55:11.192482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.856 [2024-07-24 19:55:11.192495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.856 [2024-07-24 19:55:11.192525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.856 qpair failed and we were unable to recover it.
00:25:53.856 [2024-07-24 19:55:11.202238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.856 [2024-07-24 19:55:11.202346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.856 [2024-07-24 19:55:11.202377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.856 [2024-07-24 19:55:11.202392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.856 [2024-07-24 19:55:11.202405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.856 [2024-07-24 19:55:11.202434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.856 qpair failed and we were unable to recover it.
00:25:53.856 [2024-07-24 19:55:11.212292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.856 [2024-07-24 19:55:11.212429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.856 [2024-07-24 19:55:11.212455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.856 [2024-07-24 19:55:11.212469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.856 [2024-07-24 19:55:11.212482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.856 [2024-07-24 19:55:11.212511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.856 qpair failed and we were unable to recover it.
00:25:53.856 [2024-07-24 19:55:11.222345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:53.856 [2024-07-24 19:55:11.222487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:53.856 [2024-07-24 19:55:11.222512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:53.856 [2024-07-24 19:55:11.222526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:53.856 [2024-07-24 19:55:11.222539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:53.856 [2024-07-24 19:55:11.222568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:53.856 qpair failed and we were unable to recover it.
00:25:54.116 [2024-07-24 19:55:11.232334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.116 [2024-07-24 19:55:11.232440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.116 [2024-07-24 19:55:11.232465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.116 [2024-07-24 19:55:11.232479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.116 [2024-07-24 19:55:11.232492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.116 [2024-07-24 19:55:11.232520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.116 qpair failed and we were unable to recover it.
00:25:54.116 [2024-07-24 19:55:11.242360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.116 [2024-07-24 19:55:11.242513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.116 [2024-07-24 19:55:11.242538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.116 [2024-07-24 19:55:11.242553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.116 [2024-07-24 19:55:11.242566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.116 [2024-07-24 19:55:11.242594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.116 qpair failed and we were unable to recover it.
00:25:54.116 [2024-07-24 19:55:11.252426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.116 [2024-07-24 19:55:11.252553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.116 [2024-07-24 19:55:11.252578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.116 [2024-07-24 19:55:11.252593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.116 [2024-07-24 19:55:11.252606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.116 [2024-07-24 19:55:11.252634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.116 qpair failed and we were unable to recover it.
00:25:54.116 [2024-07-24 19:55:11.262458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.116 [2024-07-24 19:55:11.262602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.116 [2024-07-24 19:55:11.262627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.116 [2024-07-24 19:55:11.262642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.116 [2024-07-24 19:55:11.262655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.116 [2024-07-24 19:55:11.262683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.116 qpair failed and we were unable to recover it.
00:25:54.116 [2024-07-24 19:55:11.272499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.116 [2024-07-24 19:55:11.272653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.116 [2024-07-24 19:55:11.272677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.116 [2024-07-24 19:55:11.272691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.116 [2024-07-24 19:55:11.272704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.116 [2024-07-24 19:55:11.272732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.282496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.282621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.282646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.282660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.282674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.282702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.292536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.292649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.292679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.292694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.292707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.292735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.302582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.302686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.302712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.302726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.302739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.302768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.312563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.312670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.312695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.312709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.312722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.312750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.322568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.322674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.322700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.322716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.322730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.322759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.332642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.332762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.332787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.332802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.332815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.332849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.342678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.342801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.342827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.342841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.342854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.342882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.352717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.352842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.352866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.352881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.352894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.352921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.362696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.362807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.362833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.362848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.362861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.362889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.372777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.372884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.372909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.372924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.372937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.372964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.382827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.382976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.383007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.383022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.383036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.383064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.392865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.392975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.393000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.393014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.393028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.393056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.402833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.117 [2024-07-24 19:55:11.402945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.117 [2024-07-24 19:55:11.402970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.117 [2024-07-24 19:55:11.402985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.117 [2024-07-24 19:55:11.402997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.117 [2024-07-24 19:55:11.403026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.117 qpair failed and we were unable to recover it.
00:25:54.117 [2024-07-24 19:55:11.412904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.413054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.413080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.413094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.413107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.413135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.422861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.422961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.422986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.423001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.423013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.423047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.432936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.433047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.433071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.433086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.433099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.433126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.442909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.443014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.443040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.443054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.443067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.443095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.452956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.453054] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.453080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.453094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.453107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.453135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.462977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.463079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.463104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.463118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.463131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.463159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.473026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.473144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.473174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.473188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.473201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.473229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.483052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.483174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.483201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.483215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.483233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.483269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.118 [2024-07-24 19:55:11.493057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.118 [2024-07-24 19:55:11.493163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.118 [2024-07-24 19:55:11.493188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.118 [2024-07-24 19:55:11.493203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.118 [2024-07-24 19:55:11.493215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.118 [2024-07-24 19:55:11.493255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.118 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.503081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.503181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.503206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.503221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.503234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.503270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.513113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.513218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.513249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.513268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.513281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.513315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.523160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.523265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.523290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.523305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.523318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.523347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.533195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.533308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.533335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.533349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.533362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.533391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.543210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.543317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.543343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.543358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.543370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.543398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.553273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.553399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.553426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.553441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.553458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.553488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.563282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.377 [2024-07-24 19:55:11.563393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.377 [2024-07-24 19:55:11.563425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.377 [2024-07-24 19:55:11.563441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.377 [2024-07-24 19:55:11.563454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.377 [2024-07-24 19:55:11.563483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.377 qpair failed and we were unable to recover it.
00:25:54.377 [2024-07-24 19:55:11.573353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.573463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.573489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.573506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.573519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.573548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.583342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.583455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.583481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.583496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.583510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.583540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.593381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.593498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.593524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.593539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.593553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.593581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.603424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.603577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.603602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.603617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.603635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.603665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.613466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.613609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.613635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.613649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.613662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.613693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.623455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.623571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.623596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.623610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.623623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.623652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.633510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.633619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.633644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.633659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.633672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.633700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.643497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.643611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.643637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.643652] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.643666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.643694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.653548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.653658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.653684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.653699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.653712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.653741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.663589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.663694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.663719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.663734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.663747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.663777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.673633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.673779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.673805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.673820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.673833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.673862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.683609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.378 [2024-07-24 19:55:11.683730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.378 [2024-07-24 19:55:11.683756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.378 [2024-07-24 19:55:11.683770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.378 [2024-07-24 19:55:11.683783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.378 [2024-07-24 19:55:11.683812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.378 qpair failed and we were unable to recover it.
00:25:54.378 [2024-07-24 19:55:11.693645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.693753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.693779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.693794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.693816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.693845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-24 19:55:11.703673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.703779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.703804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.703819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.703832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.703860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-24 19:55:11.713714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.713829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.713854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.713868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.713881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.713909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-24 19:55:11.723709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.723828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.723854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.723869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.723881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.723910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-24 19:55:11.733765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.733876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.733901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.733915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.733928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.733956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-24 19:55:11.743836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.743951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.743976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.743991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.744004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.744032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.379 [2024-07-24 19:55:11.753800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.379 [2024-07-24 19:55:11.753908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.379 [2024-07-24 19:55:11.753932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.379 [2024-07-24 19:55:11.753947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.379 [2024-07-24 19:55:11.753960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.379 [2024-07-24 19:55:11.753988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.379 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.763841] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.763948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.763973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.763987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.764000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.764028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.773902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.774003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.774029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.774043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.774056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.774084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.783911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.784023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.784048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.784062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.784081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.784110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.793916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.794024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.794049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.794064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.794077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.794105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.803941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.804047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.804072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.804087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.804100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.804128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.813971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.814074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.814100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.814114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.814127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.814155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.823989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.824095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.824119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.824134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.824146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.824174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.834046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.834155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.834180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.834194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.834206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.834235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.844109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.638 [2024-07-24 19:55:11.844277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.638 [2024-07-24 19:55:11.844302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.638 [2024-07-24 19:55:11.844317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.638 [2024-07-24 19:55:11.844330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.638 [2024-07-24 19:55:11.844359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.638 qpair failed and we were unable to recover it.
00:25:54.638 [2024-07-24 19:55:11.854074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.854178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.854204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.854218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.854231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.854266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.864101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.864206] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.864231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.864256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.864271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.864300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.874179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.874305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.874330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.874351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.874364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.874393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.884225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.884351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.884377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.884392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.884405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.884434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.894180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.894288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.894314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.894328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.894341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.894370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.904214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.904319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.904345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.904359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.904372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.904401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.914307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.914415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.914440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.914454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.914467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.914495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.924304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.924412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.924437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.924451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.924464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.924492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.934334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.934450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.934475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.934489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.934502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.934530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.944350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.944470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.944496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.944511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.944527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.944558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.954393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.954520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.954545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.954559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.954571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.954600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.964433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.964535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.964560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.964581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.964594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.964624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.974439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.974549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.974575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.639 [2024-07-24 19:55:11.974589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.639 [2024-07-24 19:55:11.974602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.639 [2024-07-24 19:55:11.974630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.639 qpair failed and we were unable to recover it.
00:25:54.639 [2024-07-24 19:55:11.984530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.639 [2024-07-24 19:55:11.984669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.639 [2024-07-24 19:55:11.984694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.640 [2024-07-24 19:55:11.984709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.640 [2024-07-24 19:55:11.984722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.640 [2024-07-24 19:55:11.984750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.640 qpair failed and we were unable to recover it.
00:25:54.640 [2024-07-24 19:55:11.994516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.640 [2024-07-24 19:55:11.994623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.640 [2024-07-24 19:55:11.994648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.640 [2024-07-24 19:55:11.994663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.640 [2024-07-24 19:55:11.994676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.640 [2024-07-24 19:55:11.994706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.640 qpair failed and we were unable to recover it.
00:25:54.640 [2024-07-24 19:55:12.004527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.640 [2024-07-24 19:55:12.004634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.640 [2024-07-24 19:55:12.004659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.640 [2024-07-24 19:55:12.004674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.640 [2024-07-24 19:55:12.004687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.640 [2024-07-24 19:55:12.004716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.640 qpair failed and we were unable to recover it.
00:25:54.640 [2024-07-24 19:55:12.014558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.640 [2024-07-24 19:55:12.014667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.640 [2024-07-24 19:55:12.014692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.640 [2024-07-24 19:55:12.014707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.640 [2024-07-24 19:55:12.014719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.640 [2024-07-24 19:55:12.014747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.640 qpair failed and we were unable to recover it.
00:25:54.899 [2024-07-24 19:55:12.024622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.899 [2024-07-24 19:55:12.024739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.899 [2024-07-24 19:55:12.024764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.899 [2024-07-24 19:55:12.024778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.899 [2024-07-24 19:55:12.024791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.899 [2024-07-24 19:55:12.024818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.899 qpair failed and we were unable to recover it.
00:25:54.899 [2024-07-24 19:55:12.034615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.899 [2024-07-24 19:55:12.034728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.899 [2024-07-24 19:55:12.034753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.899 [2024-07-24 19:55:12.034767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.899 [2024-07-24 19:55:12.034780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.899 [2024-07-24 19:55:12.034808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.899 qpair failed and we were unable to recover it.
00:25:54.899 [2024-07-24 19:55:12.044631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:54.899 [2024-07-24 19:55:12.044737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:54.899 [2024-07-24 19:55:12.044762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:54.899 [2024-07-24 19:55:12.044777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:54.899 [2024-07-24 19:55:12.044789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250
00:25:54.899 [2024-07-24 19:55:12.044818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:54.899 qpair failed and we were unable to recover it.
00:25:54.899 [2024-07-24 19:55:12.054674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.054782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.054807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.054828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.054842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.054873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.064669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.064766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.064791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.064805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.064816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.064845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.074735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.074840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.074865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.074880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.074893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.074921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 
00:25:54.900 [2024-07-24 19:55:12.084758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.084866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.084892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.084907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.084920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.084948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.094785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.094891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.094916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.094930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.094943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.094972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.104799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.104904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.104929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.104944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.104956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.104984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 
00:25:54.900 [2024-07-24 19:55:12.114843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.114953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.114976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.114990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.115002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.115029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.124863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.124972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.124997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.125012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.125025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.125053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.134911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.135015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.135041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.135055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.135068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.135096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 
00:25:54.900 [2024-07-24 19:55:12.144926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.145043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.145073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.145088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.145101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.145130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.154970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.155079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.155104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.155118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.155131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.155158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.164973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.165076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.165101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.165115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.165128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.165157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 
00:25:54.900 [2024-07-24 19:55:12.175025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.900 [2024-07-24 19:55:12.175133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.900 [2024-07-24 19:55:12.175157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.900 [2024-07-24 19:55:12.175171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.900 [2024-07-24 19:55:12.175184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.900 [2024-07-24 19:55:12.175212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.900 qpair failed and we were unable to recover it. 00:25:54.900 [2024-07-24 19:55:12.185041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.185144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.185169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.185184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.185197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.185225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:54.901 [2024-07-24 19:55:12.195083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.195197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.195223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.195237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.195258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.195287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 
00:25:54.901 [2024-07-24 19:55:12.205091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.205199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.205225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.205239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.205261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.205290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:54.901 [2024-07-24 19:55:12.215128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.215230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.215262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.215277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.215290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.215318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:54.901 [2024-07-24 19:55:12.225155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.225268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.225294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.225308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.225321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.225349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 
00:25:54.901 [2024-07-24 19:55:12.235193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.235309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.235340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.235355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.235367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.235396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:54.901 [2024-07-24 19:55:12.245214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.245330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.245356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.245371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.245383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.245411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:54.901 [2024-07-24 19:55:12.255281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.255425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.255450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.255464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.255477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.255505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 
00:25:54.901 [2024-07-24 19:55:12.265258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.265384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.265409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.265423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.265436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.265464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:54.901 [2024-07-24 19:55:12.275337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:54.901 [2024-07-24 19:55:12.275444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:54.901 [2024-07-24 19:55:12.275469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:54.901 [2024-07-24 19:55:12.275484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:54.901 [2024-07-24 19:55:12.275497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:54.901 [2024-07-24 19:55:12.275531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:54.901 qpair failed and we were unable to recover it. 00:25:55.160 [2024-07-24 19:55:12.285374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.285482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.160 [2024-07-24 19:55:12.285507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.160 [2024-07-24 19:55:12.285523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.160 [2024-07-24 19:55:12.285535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.160 [2024-07-24 19:55:12.285565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.160 qpair failed and we were unable to recover it. 
00:25:55.160 [2024-07-24 19:55:12.295351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.295455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.160 [2024-07-24 19:55:12.295480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.160 [2024-07-24 19:55:12.295494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.160 [2024-07-24 19:55:12.295507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.160 [2024-07-24 19:55:12.295535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.160 qpair failed and we were unable to recover it. 00:25:55.160 [2024-07-24 19:55:12.305390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.305498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.160 [2024-07-24 19:55:12.305524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.160 [2024-07-24 19:55:12.305539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.160 [2024-07-24 19:55:12.305552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.160 [2024-07-24 19:55:12.305580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.160 qpair failed and we were unable to recover it. 00:25:55.160 [2024-07-24 19:55:12.315403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.315506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.160 [2024-07-24 19:55:12.315531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.160 [2024-07-24 19:55:12.315545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.160 [2024-07-24 19:55:12.315558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.160 [2024-07-24 19:55:12.315586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.160 qpair failed and we were unable to recover it. 
00:25:55.160 [2024-07-24 19:55:12.325424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.325528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.160 [2024-07-24 19:55:12.325559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.160 [2024-07-24 19:55:12.325574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.160 [2024-07-24 19:55:12.325587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.160 [2024-07-24 19:55:12.325616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.160 qpair failed and we were unable to recover it. 00:25:55.160 [2024-07-24 19:55:12.335450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.335566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.160 [2024-07-24 19:55:12.335592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.160 [2024-07-24 19:55:12.335606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.160 [2024-07-24 19:55:12.335619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.160 [2024-07-24 19:55:12.335647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.160 qpair failed and we were unable to recover it. 00:25:55.160 [2024-07-24 19:55:12.345546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.160 [2024-07-24 19:55:12.345651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.345676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.345690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.345703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.345731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 
00:25:55.161 [2024-07-24 19:55:12.355529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.355637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.355662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.355676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.355689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.355717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.365581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.365719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.365744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.365760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.365773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.365807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.375578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.375676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.375702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.375716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.375729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.375757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 
00:25:55.161 [2024-07-24 19:55:12.385590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.385690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.385715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.385730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.385743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.385771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.395673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.395782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.395808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.395822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.395835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.395864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.405646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.405744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.405770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.405784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.405797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.405825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 
00:25:55.161 [2024-07-24 19:55:12.415668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.415764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.415798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.415813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.415826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.415854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.425738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.425839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.425864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.425878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.425891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.425919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.435740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.435845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.435870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.435884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.435897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.435925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 
00:25:55.161 [2024-07-24 19:55:12.445797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.445909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.445935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.445949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.445962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.445991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.455830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.455939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.455964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.455979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.455992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.456026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 00:25:55.161 [2024-07-24 19:55:12.465834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.465932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.465958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.465972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.161 [2024-07-24 19:55:12.465985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.161 [2024-07-24 19:55:12.466012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.161 qpair failed and we were unable to recover it. 
00:25:55.161 [2024-07-24 19:55:12.475856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.161 [2024-07-24 19:55:12.475965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.161 [2024-07-24 19:55:12.475990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.161 [2024-07-24 19:55:12.476004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.476017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.476047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 00:25:55.162 [2024-07-24 19:55:12.485886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.162 [2024-07-24 19:55:12.485992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.162 [2024-07-24 19:55:12.486017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.162 [2024-07-24 19:55:12.486031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.486044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.486072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 00:25:55.162 [2024-07-24 19:55:12.495930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.162 [2024-07-24 19:55:12.496031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.162 [2024-07-24 19:55:12.496056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.162 [2024-07-24 19:55:12.496071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.496083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.496111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 
00:25:55.162 [2024-07-24 19:55:12.505936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.162 [2024-07-24 19:55:12.506064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.162 [2024-07-24 19:55:12.506095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.162 [2024-07-24 19:55:12.506110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.506123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.506151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 00:25:55.162 [2024-07-24 19:55:12.515986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.162 [2024-07-24 19:55:12.516107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.162 [2024-07-24 19:55:12.516133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.162 [2024-07-24 19:55:12.516149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.516165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.516195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 00:25:55.162 [2024-07-24 19:55:12.525988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.162 [2024-07-24 19:55:12.526109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.162 [2024-07-24 19:55:12.526135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.162 [2024-07-24 19:55:12.526149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.526162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.526191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 
00:25:55.162 [2024-07-24 19:55:12.536011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.162 [2024-07-24 19:55:12.536108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.162 [2024-07-24 19:55:12.536133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.162 [2024-07-24 19:55:12.536147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.162 [2024-07-24 19:55:12.536160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.162 [2024-07-24 19:55:12.536188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.162 qpair failed and we were unable to recover it. 00:25:55.421 [2024-07-24 19:55:12.546035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.421 [2024-07-24 19:55:12.546190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.421 [2024-07-24 19:55:12.546216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.421 [2024-07-24 19:55:12.546230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.421 [2024-07-24 19:55:12.546255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.421 [2024-07-24 19:55:12.546285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.421 qpair failed and we were unable to recover it. 00:25:55.421 [2024-07-24 19:55:12.556127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.421 [2024-07-24 19:55:12.556239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.421 [2024-07-24 19:55:12.556270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.421 [2024-07-24 19:55:12.556284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.421 [2024-07-24 19:55:12.556298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.421 [2024-07-24 19:55:12.556326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.421 qpair failed and we were unable to recover it. 
00:25:55.421 [2024-07-24 19:55:12.566156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.421 [2024-07-24 19:55:12.566293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.421 [2024-07-24 19:55:12.566319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.421 [2024-07-24 19:55:12.566334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.421 [2024-07-24 19:55:12.566347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.421 [2024-07-24 19:55:12.566377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.421 qpair failed and we were unable to recover it. 00:25:55.421 [2024-07-24 19:55:12.576118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.421 [2024-07-24 19:55:12.576220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.421 [2024-07-24 19:55:12.576252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.422 [2024-07-24 19:55:12.576271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.422 [2024-07-24 19:55:12.576285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.422 [2024-07-24 19:55:12.576313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.422 qpair failed and we were unable to recover it. 00:25:55.422 [2024-07-24 19:55:12.586147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.422 [2024-07-24 19:55:12.586258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.422 [2024-07-24 19:55:12.586283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.422 [2024-07-24 19:55:12.586298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.422 [2024-07-24 19:55:12.586310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.422 [2024-07-24 19:55:12.586339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.422 qpair failed and we were unable to recover it. 
00:25:55.945 [2024-07-24 19:55:13.258086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.945 [2024-07-24 19:55:13.258186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.945 [2024-07-24 19:55:13.258211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.945 [2024-07-24 19:55:13.258225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.945 [2024-07-24 19:55:13.258238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.945 [2024-07-24 19:55:13.258274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.945 qpair failed and we were unable to recover it. 00:25:55.945 [2024-07-24 19:55:13.268095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.945 [2024-07-24 19:55:13.268223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.945 [2024-07-24 19:55:13.268260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.945 [2024-07-24 19:55:13.268277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.945 [2024-07-24 19:55:13.268290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.945 [2024-07-24 19:55:13.268318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.945 qpair failed and we were unable to recover it. 00:25:55.945 [2024-07-24 19:55:13.278137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.945 [2024-07-24 19:55:13.278252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.945 [2024-07-24 19:55:13.278278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.945 [2024-07-24 19:55:13.278292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.945 [2024-07-24 19:55:13.278304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5b5250 00:25:55.945 [2024-07-24 19:55:13.278334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:55.945 qpair failed and we were unable to recover it. 
00:25:55.945 [2024-07-24 19:55:13.288187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.945 [2024-07-24 19:55:13.288302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.945 [2024-07-24 19:55:13.288334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.945 [2024-07-24 19:55:13.288350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.945 [2024-07-24 19:55:13.288363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fce84000b90 00:25:55.945 [2024-07-24 19:55:13.288396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.945 qpair failed and we were unable to recover it. 00:25:55.945 [2024-07-24 19:55:13.298232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.945 [2024-07-24 19:55:13.298337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.945 [2024-07-24 19:55:13.298365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.946 [2024-07-24 19:55:13.298379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.946 [2024-07-24 19:55:13.298392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fce84000b90 00:25:55.946 [2024-07-24 19:55:13.298423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:55.946 qpair failed and we were unable to recover it. 00:25:55.946 [2024-07-24 19:55:13.298531] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:25:55.946 A controller has encountered a failure and is being reset. 00:25:55.946 [2024-07-24 19:55:13.308233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.946 [2024-07-24 19:55:13.308349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.946 [2024-07-24 19:55:13.308381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.946 [2024-07-24 19:55:13.308402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.946 [2024-07-24 19:55:13.308417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fce7c000b90 00:25:55.946 [2024-07-24 19:55:13.308449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:55.946 qpair failed and we were unable to recover it. 
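The "Submitting Keep Alive failed" line above marks the turning point: once the admin-queue Keep Alive can no longer be submitted, the host declares the controller failed and begins a reset, and the subsequent retries run against freshly allocated qpair objects (note the tqpair pointer changing from 0x5b5250 to heap addresses such as 0x7fce84000b90). This test drives the in-process SPDK host directly, but the same keep-alive and reconnect behavior is tunable when attaching through the bdev layer; a hedged sketch, with flag spellings as in recent SPDK and values that are illustrative only:

# Sketch only, not what this test runs: keep-alive / reconnect knobs
# when attaching an NVMe-oF controller via the bdev layer.
rpc.py bdev_nvme_set_options --keep-alive-timeout-ms 5000
rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 60 --reconnect-delay-sec 2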
00:25:55.946 [2024-07-24 19:55:13.318303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:55.946 [2024-07-24 19:55:13.318424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:55.946 [2024-07-24 19:55:13.318451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:55.946 [2024-07-24 19:55:13.318467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:55.946 [2024-07-24 19:55:13.318479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fce7c000b90 00:25:55.946 [2024-07-24 19:55:13.318512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:55.946 qpair failed and we were unable to recover it. 00:25:56.202 [2024-07-24 19:55:13.328348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.202 [2024-07-24 19:55:13.328487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.202 [2024-07-24 19:55:13.328520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.202 [2024-07-24 19:55:13.328536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.202 [2024-07-24 19:55:13.328550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fce8c000b90 00:25:56.202 [2024-07-24 19:55:13.328583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.202 qpair failed and we were unable to recover it. 00:25:56.202 [2024-07-24 19:55:13.338373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:56.202 [2024-07-24 19:55:13.338508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:56.202 [2024-07-24 19:55:13.338536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:56.202 [2024-07-24 19:55:13.338550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:56.202 [2024-07-24 19:55:13.338564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fce8c000b90 00:25:56.202 [2024-07-24 19:55:13.338595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.202 qpair failed and we were unable to recover it. 00:25:56.202 Controller properly reset. 
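This is the shape of target-disconnect test case 2: the target side is deliberately disrupted while host qpairs are connecting and doing I/O, the host burns through the reconnect attempts logged above, and once the subsystem becomes reachable again the pending reset completes ("Controller properly reset.") and the workload threads relaunch below. A hedged sketch of the target-side disrupt/restore cycle, assuming the usual rpc.py helpers rather than the exact commands in host/target_disconnect.sh (subsystem and bdev names illustrative):

# Assumed equivalent of the disrupt/restore cycle this test exercises.
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1      # host CONNECTs start failing
sleep 1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                               # host reset can now finish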
00:25:56.202 Initializing NVMe Controllers 00:25:56.202 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.202 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:56.202 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:56.202 Initialization complete. Launching workers. 00:25:56.202 Starting thread on core 1 00:25:56.202 Starting thread on core 2 00:25:56.202 Starting thread on core 3 00:25:56.202 Starting thread on core 0 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:56.202 00:25:56.202 real 0m10.780s 00:25:56.202 user 0m18.939s 00:25:56.202 sys 0m5.331s 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:56.202 ************************************ 00:25:56.202 END TEST nvmf_target_disconnect_tc2 00:25:56.202 ************************************ 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # nvmfcleanup 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.202 rmmod nvme_tcp 00:25:56.202 rmmod nvme_fabrics 00:25:56.202 rmmod nvme_keyring 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # '[' -n 1285643 ']' 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@494 -- # killprocess 1285643 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' -z 1285643 ']' 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # kill -0 1285643 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # uname 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' 
Linux = Linux ']' 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1285643 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # process_name=reactor_4 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@961 -- # '[' reactor_4 = sudo ']' 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1285643' 00:25:56.202 killing process with pid 1285643 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # kill 1285643 00:25:56.202 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@975 -- # wait 1285643 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@282 -- # remove_spdk_ns 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.460 19:55:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.989 19:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:25:58.989 00:25:58.989 real 0m15.694s 00:25:58.989 user 0m44.916s 00:25:58.989 sys 0m7.382s 00:25:58.989 19:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:58.989 19:55:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:58.989 ************************************ 00:25:58.989 END TEST nvmf_target_disconnect 00:25:58.989 ************************************ 00:25:58.989 19:55:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:58.989 00:25:58.989 real 5m1.878s 00:25:58.989 user 10m44.824s 00:25:58.989 sys 1m11.230s 00:25:58.989 19:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # xtrace_disable 00:25:58.990 19:55:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 ************************************ 00:25:58.990 END TEST nvmf_host 00:25:58.990 ************************************ 00:25:58.990 19:55:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:25:58.990 19:55:15 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:25:58.990 19:55:15 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:58.990 19:55:15 nvmf_tcp -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:25:58.990 19:55:15 nvmf_tcp -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:58.990 19:55:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 ************************************ 00:25:58.990 START TEST 
nvmf_target_core_interrupt_mode 00:25:58.990 ************************************ 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:25:58.990 * Looking for test storage... 00:25:58.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
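From here, nvmf_target_core.sh captures its arguments and dispatches each sub-test through run_test, which is why every later trace line carries the nested test names (nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort and so on) and why the log shows the starred START TEST / END TEST banners. A rough sketch of the dispatcher's shape; the real definition, with timing and xtrace handling, lives in test/common/autotest_common.sh:

# Simplified run_test shape (illustrative, not the exact source):
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    "$@"                              # run the test script with its args
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}
run_test "nvmf_abort" test/nvmf/target/abort.sh --transport=tcp --interrupt-mode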
00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:25:58.990 ************************************ 00:25:58.990 START TEST nvmf_abort 00:25:58.990 ************************************ 00:25:58.990 19:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:25:58.990 * Looking for test storage... 00:25:58.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 
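With --transport=tcp and NET_TYPE=phy, nvmftestinit skips the virtual-interface path and scans the PCI bus for supported NICs; below it finds the two ice-driven E810 ports (0x8086:0x159b) and splits them into a target side and an initiator side. Condensed, with names and addresses taken from this run:

# Net-device selection as performed below:
# pci_devs <- E810 ports 0000:0a:00.0 and 0000:0a:00.1 (0x8086:0x159b, driver ice)
NVMF_TARGET_INTERFACE=cvl_0_0        # first port: target side
NVMF_INITIATOR_INTERFACE=cvl_0_1     # second port: initiator side
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk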
00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:58.990 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@452 -- # prepare_net_devs 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@414 -- # local -g is_hw=no 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # remove_spdk_ns 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # 
gather_supported_nvmf_pci_devs 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # xtrace_disable 00:25:58.991 19:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:00.891 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@295 -- # pci_devs=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@295 -- # local -a pci_devs 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # pci_net_devs=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # pci_drivers=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # local -A pci_drivers 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # net_devs=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # local -ga net_devs 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # e810=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # local -ga e810 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@301 -- # x722=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@301 -- # local -ga x722 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # mlx=() 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # local -ga mlx 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:00.892 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:00.892 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@393 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@394 -- # [[ up == up ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:00.892 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@394 -- # [[ up == up ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:00.892 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # is_hw=yes 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:26:00.892 19:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:00.892 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:00.892 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:00.892 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:26:00.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:00.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:26:00.893 00:26:00.893 --- 10.0.0.2 ping statistics --- 00:26:00.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.893 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:00.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:00.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:26:00.893 00:26:00.893 --- 10.0.0.1 ping statistics --- 00:26:00.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.893 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@426 -- # return 0 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@725 -- # xtrace_disable 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@485 -- # nvmfpid=1288435 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@486 -- # waitforlisten 1288435 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@832 -- # '[' -z 1288435 ']' 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local max_retries=100 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@841 -- # xtrace_disable 00:26:00.893 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:00.893 [2024-07-24 19:55:18.158839] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
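The namespace plumbing performed in the trace above, condensed: the target port is moved into its own network namespace, each side gets an address, port 4420 is opened, and the two pings prove each side can reach the other before the target starts. All of these commands appear in the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator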
00:26:00.893 [2024-07-24 19:55:18.159872] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:26:00.893 [2024-07-24 19:55:18.159924] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.893 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.893 [2024-07-24 19:55:18.227527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:01.151 [2024-07-24 19:55:18.334384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.151 [2024-07-24 19:55:18.334439] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.151 [2024-07-24 19:55:18.334466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.151 [2024-07-24 19:55:18.334478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.151 [2024-07-24 19:55:18.334494] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.151 [2024-07-24 19:55:18.334824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:01.151 [2024-07-24 19:55:18.334882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:01.151 [2024-07-24 19:55:18.334886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.151 [2024-07-24 19:55:18.426164] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:01.151 [2024-07-24 19:55:18.426388] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:01.151 [2024-07-24 19:55:18.437335] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:01.151 [2024-07-24 19:55:18.437615] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:26:01.151 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:26:01.151 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@865 -- # return 0 00:26:01.151 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:01.151 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@731 -- # xtrace_disable 00:26:01.151 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 [2024-07-24 19:55:18.487661] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 Malloc0 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.152 Delay0 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.152 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.410 19:55:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.410 [2024-07-24 19:55:18.547835] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:01.410 19:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:26:01.410 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.410 [2024-07-24 19:55:18.603600] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:26:03.308 Initializing NVMe Controllers 00:26:03.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:26:03.308 controller IO queue size 128 less than required 00:26:03.308 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:26:03.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:26:03.308 Initialization complete. Launching workers. 
00:26:03.308 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33017 00:26:03.308 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33074, failed to submit 66 00:26:03.308 success 33017, unsuccess 57, failed 0 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # nvmfcleanup 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:03.308 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:03.308 rmmod nvme_tcp 00:26:03.308 rmmod nvme_fabrics 00:26:03.566 rmmod nvme_keyring 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # '[' -n 1288435 ']' 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@494 -- # killprocess 1288435 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@951 -- # '[' -z 1288435 ']' 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@955 -- # kill -0 1288435 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # uname 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1288435 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1288435' 00:26:03.566 killing process with pid 1288435 
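Condensed for reference: the nvmf_abort flow traced above reduces to the sequence below. This is a sketch reconstructed from the xtrace lines, not the verbatim test script; rpc_cmd is the harness wrapper around scripts/rpc.py, and the target itself was started earlier as 'ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE' (cores 1-3, interrupt mode).

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport, 8 KiB IO unit, 256-deep admin queue
  rpc_cmd bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB backing bdev, 4 KiB blocks
  rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0

The delay bdev (1,000,000 us on every latency knob) keeps I/O outstanding long enough for the abort example to exercise abort handling against in-flight requests.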
00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # kill 1288435 00:26:03.566 19:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@975 -- # wait 1288435 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@282 -- # remove_spdk_ns 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:03.825 19:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.722 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:26:05.722 00:26:05.722 real 0m7.091s 00:26:05.722 user 0m8.543s 00:26:05.722 sys 0m3.035s 00:26:05.722 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # xtrace_disable 00:26:05.722 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:26:05.722 ************************************ 00:26:05.722 END TEST nvmf_abort 00:26:05.722 ************************************ 00:26:05.980 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:05.980 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:26:05.980 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:26:05.980 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:26:05.980 ************************************ 00:26:05.980 START TEST nvmf_ns_hotplug_stress 00:26:05.980 ************************************ 00:26:05.980 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:26:05.980 * Looking for test storage... 
00:26:05.980 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin (repeated PATH prepend/export/echo lines from paths/export.sh elided; the same toolchain directories are prepended several times over) 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.981 19:55:23
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@452 -- # prepare_net_devs 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # local -g is_hw=no 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # remove_spdk_ns 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # xtrace_disable 00:26:05.981 19:55:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # pci_devs=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -a pci_devs 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # pci_net_devs=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # 
pci_drivers=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -A pci_drivers 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # net_devs=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # local -ga net_devs 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # e810=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # local -ga e810 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # x722=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # local -ga x722 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # mlx=() 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # local -ga mlx 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:26:07.880 
19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:07.880 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:07.880 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # [[ up == up ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:07.880 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # [[ up == up ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:07.880 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # is_hw=yes 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:07.880 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:07.881 19:55:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:07.881 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:26:08.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:08.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:26:08.139 00:26:08.139 --- 10.0.0.2 ping statistics --- 00:26:08.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.139 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:08.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:08.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:26:08.139 00:26:08.139 --- 10.0.0.1 ping statistics --- 00:26:08.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:08.139 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # return 0 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@725 -- # xtrace_disable 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@485 -- # nvmfpid=1290722 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@486 -- # waitforlisten 1290722 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # '[' -z 1290722 ']' 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local max_retries=100 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
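For orientation, the network bring-up traced above (nvmf_tcp_init in nvmf/common.sh) amounts to the following sketch: the target-side port cvl_0_0 is moved into its own namespace while the initiator port cvl_0_1 stays in the root namespace, and the two pings verify connectivity in both directions before the target starts.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator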
00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@841 -- # xtrace_disable 00:26:08.139 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.139 [2024-07-24 19:55:25.397271] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:08.139 [2024-07-24 19:55:25.398518] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:26:08.139 [2024-07-24 19:55:25.398605] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.139 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.139 [2024-07-24 19:55:25.464945] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:08.397 [2024-07-24 19:55:25.574648] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:08.397 [2024-07-24 19:55:25.574700] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:08.397 [2024-07-24 19:55:25.574724] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:08.397 [2024-07-24 19:55:25.574735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:08.397 [2024-07-24 19:55:25.574745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:08.397 [2024-07-24 19:55:25.574876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:08.397 [2024-07-24 19:55:25.574935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:08.397 [2024-07-24 19:55:25.574938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.397 [2024-07-24 19:55:25.662774] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:08.397 [2024-07-24 19:55:25.662992] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:26:08.397 [2024-07-24 19:55:25.674332] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:08.397 [2024-07-24 19:55:25.674586] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@865 -- # return 0 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@731 -- # xtrace_disable 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:26:08.397 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:08.655 [2024-07-24 19:55:25.951667] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.655 19:55:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:08.912 19:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.170 [2024-07-24 19:55:26.443947] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:09.170 19:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:09.427 19:55:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:26:09.685 Malloc0 00:26:09.685 19:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:09.943 Delay0 00:26:09.943 19:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:10.201 19:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:26:10.458 NULL1 00:26:10.458 19:55:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
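The target-side setup just traced is, written out as plain rpc.py calls, the sketch below (condensed from the ns_hotplug_stress.sh lines 27-36 visible in the trace; the full /var/jenkins/... path to rpc.py is shortened here):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 512 -b Malloc0                         # 32 MiB, 512 B blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # becomes nsid 1
  rpc.py bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, resized below
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1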
00:26:10.715 19:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1291062 00:26:10.715 19:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:26:10.715 19:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:10.715 19:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:10.715 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.085 Read completed with error (sct=0, sc=11) 00:26:12.085 19:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:12.085 19:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:26:12.085 19:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:26:12.342 true 00:26:12.342 19:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:12.342 19:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:13.275 19:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:13.573 19:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:26:13.573 19:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:26:13.573 true 00:26:13.854 19:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:13.854 19:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
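Everything from here to the end of the test is repetitions of the hotplug loop, which per the trace (ns_hotplug_stress.sh lines 44-50) looks like the sketch below. PERF_PID is the spdk_nvme_perf initiator started above with -t 30, so the loop keeps cycling for roughly 30 seconds, and the suppressed 'Read completed with error (sct=0, sc=11)' messages are reads failing while namespace 1 is detached, which is exactly what the test is stressing.

  null_size=1000
  while kill -0 $PERF_PID; do                                          # initiator still alive?
      rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-remove nsid 1
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-add it back
      null_size=$((null_size + 1))
      rpc.py bdev_null_resize NULL1 $null_size                         # grow NULL1 under I/O; prints 'true'
  done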
00:26:13.854 19:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:14.111 19:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:26:14.111 19:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:26:14.369 true 00:26:14.369 19:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:14.369 19:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:15.301 19:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:15.559 19:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:26:15.559 19:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:26:15.559 true 00:26:15.816 19:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:15.816 19:55:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:16.076 19:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:16.333 19:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:26:16.333 19:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:26:16.333 true 00:26:16.333 19:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:16.333 19:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:17.265 19:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:17.522 19:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:26:17.522 19:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:26:17.779 true 00:26:17.779 19:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:17.779 19:55:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:18.037 19:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:18.294 19:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:26:18.294 19:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:26:18.552 true 00:26:18.552 19:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:18.552 19:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.116 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:19.373 19:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:19.630 19:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:26:19.630 19:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:26:19.630 true 00:26:19.887 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:19.887 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:19.887 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:20.144 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:26:20.144 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:26:20.402 true 00:26:20.402 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:20.402 19:55:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:21.333 19:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:21.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:21.591 19:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:26:21.591 19:55:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:26:21.847 true 00:26:21.847 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:21.847 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:22.102 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:22.359 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:26:22.359 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:26:22.615 true 00:26:22.615 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:22.615 19:55:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:23.545 19:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:23.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:23.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:23.802 19:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:26:23.802 19:55:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:26:24.059 true 00:26:24.059 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:24.059 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:24.315 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:26:24.572 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:26:24.572 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:26:24.572 true 00:26:24.572 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:24.572 19:55:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:25.940 19:55:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.940 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:25.940 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:26:25.940 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:26:26.196 true 00:26:26.196 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:26.196 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:26.452 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:26.707 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:26:26.707 19:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:26:26.964 true 00:26:26.964 19:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:26.964 19:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:27.896 19:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:27.896 19:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:26:27.896 19:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:26:28.235 true 00:26:28.235 19:55:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:28.235 19:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:28.492 19:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:28.749 19:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:26:28.749 19:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:26:29.006 true 00:26:29.006 19:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:29.006 19:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:29.937 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:29.937 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:29.937 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:26:29.937 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:26:30.194 true 00:26:30.194 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:30.194 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:30.451 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:30.709 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:26:30.709 19:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:26:30.966 true 00:26:30.966 19:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:30.966 19:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:31.899 19:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:31.899 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:32.157 19:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:26:32.157 19:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:26:32.414 true 00:26:32.414 19:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:32.414 19:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:32.672 19:55:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.930 19:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:26:32.930 19:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:26:33.187 true 00:26:33.187 19:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:33.187 19:55:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:34.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:34.119 19:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:34.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:34.119 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:34.375 19:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:26:34.375 19:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:26:34.632 true 00:26:34.632 19:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:34.632 19:55:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:34.889 19:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:35.146 19:55:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:26:35.146 19:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:26:35.403 true 00:26:35.403 19:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:35.403 19:55:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.334 19:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:36.591 19:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:26:36.591 19:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:26:36.591 true 00:26:36.591 19:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:36.591 19:55:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:36.848 19:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:37.105 19:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:26:37.105 19:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:26:37.362 true 00:26:37.362 19:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062 00:26:37.362 19:55:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:38.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:38.294 19:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:38.294 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:26:38.552 19:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:26:38.552 19:55:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:26:38.809 true 
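The sh@44-50 xtrace records above all come from the main hotplug loop in ns_hotplug_stress.sh, which cycles one namespace in and out while growing the null bdev by one unit per pass. A minimal sketch of that loop, reconstructed from the trace; the $rpc and $perf_pid shorthands and the starting size are readability assumptions, not the script's literal text:

    # Reconstructed from the sh@44-50 trace lines; hypothetical variable names.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf_pid=1291062                  # PID of the background I/O generator probed by kill -0
    null_size=1006
    while kill -0 $perf_pid; do       # sh@44: keep cycling while the workload still runs
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # sh@45: hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # sh@46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                   # sh@49: next target size
        $rpc bdev_null_resize NULL1 $null_size                         # sh@50: resize NULL1 under live I/O
    done

Once the generator exits, kill -0 fails (the "No such process" record below) and the script falls through to the wait and the sh@54-55 cleanup removals.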
00:26:38.809 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062
00:26:38.809 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:39.066 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:39.323 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:26:39.323 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:26:39.580 true
00:26:39.580 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062
00:26:39.580 19:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:40.509 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:26:40.509 19:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:40.766 19:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:26:40.766 19:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:26:40.766 true
00:26:40.766 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062
00:26:40.766 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:41.024 Initializing NVMe Controllers
00:26:41.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:41.024 Controller IO queue size 128, less than required.
00:26:41.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:41.024 Controller IO queue size 128, less than required.
00:26:41.024 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:41.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:41.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:26:41.024 Initialization complete. Launching workers.
00:26:41.024 ========================================================
00:26:41.024 Latency(us)
00:26:41.024 Device Information : IOPS MiB/s Average min max
00:26:41.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 765.62 0.37 93285.80 2707.50 1015339.18
00:26:41.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12025.25 5.87 10643.96 3370.07 455154.29
00:26:41.024 ========================================================
00:26:41.024 Total : 12790.87 6.25 15590.63 2707.50 1015339.18
00:26:41.024
00:26:41.024 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:26:41.281 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:26:41.281 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:26:41.539 true
00:26:41.539 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1291062
00:26:41.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1291062) - No such process
00:26:41.539 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1291062
00:26:41.539 19:55:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:26:41.796 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:26:42.053 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:26:42.053 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:26:42.053 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:26:42.053 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:26:42.053 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:26:42.310 null0
00:26:42.310 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:26:42.310 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:26:42.310 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:26:42.568 null1
00:26:42.568 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:26:42.568
19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:42.568 19:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:26:42.827 null2 00:26:42.827 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:42.827 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:42.827 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:26:43.085 null3 00:26:43.085 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.085 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.085 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:26:43.342 null4 00:26:43.342 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.342 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.342 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:26:43.600 null5 00:26:43.600 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.600 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.600 19:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:26:43.858 null6 00:26:43.858 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:43.858 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:43.858 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:26:44.115 null7 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.115 19:56:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
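Each pids+=($!) record here marks one backgrounded copy of the add_remove helper whose body shows up in the sh@14-18 trace lines. A sketch of that helper as implied by the trace; the function header and the $rpc shorthand (standing in for the full rpc.py path in the log) are assumptions:

    add_remove() {                       # traced at ns_hotplug_stress.sh@14-18
        local nsid=$1 bdev=$2            # sh@14: namespace ID and backing null bdev
        for ((i = 0; i < 10; i++)); do   # sh@16: ten add/remove rounds per worker
            $rpc nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 $bdev   # sh@17: hot-add
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid         # sh@18: hot-remove
        done
    }

Eight of these run concurrently, which is why the add and remove records for different NSIDs interleave from here on.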
00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.115 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
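The sh@58-64 records surrounding these spawns show the setup: one null bdev per worker, then each add_remove forked into the background with its PID collected. A sketch of that spawn sequence, reconstructed from the trace; the `&` and the array expansion passed to wait are assumptions implied by pids+=($!) and the later `wait 1295083 ...` record:

    nthreads=8; pids=()                           # sh@58
    for ((i = 0; i < nthreads; i++)); do          # sh@59
        $rpc bdev_null_create null$i 100 4096     # sh@60: args per the usual rpc.py signature: name, size in MB, block size
    done
    for ((i = 0; i < nthreads; i++)); do          # sh@62
        add_remove $((i + 1)) null$i &            # sh@63: NSID i+1 backed by null$i, backgrounded
        pids+=($!)                                # sh@64: collect the worker PID
    done
    wait "${pids[@]}"                             # sh@66: shows up below as `wait 1295083 1295084 ...`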
00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1295083 1295084 1295086 1295088 1295090 1295092 1295094 1295096 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.116 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.412 19:56:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:44.412 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:44.692 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:44.693 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:44.693 19:56:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:44.950 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 
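All of these records go through the same thin JSON-RPC client. For reference, a roughly equivalent raw request for one of the add calls, assuming SPDK's default application socket at /var/tmp/spdk.sock and the conventional nvmf_subsystem_add_ns parameter layout (both assumptions; neither appears in this log), with nc here being OpenBSD netcat (-U for AF_UNIX):

    # Hypothetical raw form of one rpc.py nvmf_subsystem_add_ns invocation.
    printf '%s' '{"jsonrpc": "2.0", "id": 1, "method": "nvmf_subsystem_add_ns", "params": {"nqn": "nqn.2016-06.io.spdk:cnode1", "namespace": {"bdev_name": "null0", "nsid": 1}}}' | nc -U /var/tmp/spdk.sock

rpc.py wraps this kind of request/response round trip, so each xtrace record here corresponds to one RPC exchange with the target.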
00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.208 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.209 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:45.467 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:45.725 19:56:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:45.983 19:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:45.983 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.240 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:46.498 
19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:46.498 19:56:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:46.756 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:47.015 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.272 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.273 
19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.273 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:47.529 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:47.529 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:47.529 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:47.529 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:47.529 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:47.529 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:47.530 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:47.530 19:56:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.787 19:56:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:47.787 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:48.045 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:48.303 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:48.303 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:48.303 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.303 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:48.303 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:48.561 19:56:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:48.561 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:48.819 
19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:48.819 19:56:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:49.076 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.077 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:26:49.334 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:26:49.592 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # nvmfcleanup 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:49.593 rmmod nvme_tcp 00:26:49.593 rmmod nvme_fabrics 00:26:49.593 rmmod nvme_keyring 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # '[' -n 1290722 ']' 00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # killprocess 1290722 00:26:49.593 19:56:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' -z 1290722 ']'
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # kill -0 1290722
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # uname
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1290722
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # process_name=reactor_1
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']'
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1290722'
00:26:49.593 killing process with pid 1290722
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # kill 1290722
00:26:49.593 19:56:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@975 -- # wait 1290722
00:26:49.851 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' '' == iso ']'
00:26:49.851 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]]
00:26:49.851 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # nvmf_tcp_fini
00:26:49.851 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:49.851 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@282 -- # remove_spdk_ns
00:26:49.852 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:49.852 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:49.852 19:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1
00:26:52.382
00:26:52.382 real 0m46.060s
00:26:52.382 user 3m5.344s
00:26:52.382 sys 0m25.379s
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # xtrace_disable
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:26:52.382 ************************************
00:26:52.382 END TEST nvmf_ns_hotplug_stress
00:26:52.382 ************************************
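The iterations dumped above are all instances of the add/remove cycle traced at target/ns_hotplug_stress.sh@16-18. A minimal reconstruction of that cycle from the xtrace, assuming the per-namespace RPCs are launched as background jobs (which would account for the shuffled RPC ordering in the log); this is a sketch inferred from the trace, not the verbatim SPDK script:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do
        for n in {1..8}; do
            # attach null bdev null(N-1) to the subsystem as namespace ID N
            $rpc nvmf_subsystem_add_ns -n $n $nqn null$((n - 1)) &
        done
        wait
        for n in {1..8}; do
            # hot-remove namespace ID N from the subsystem again
            $rpc nvmf_subsystem_remove_ns $nqn $n &
        done
        wait
    done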
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']'
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:26:52.382 ************************************
00:26:52.382 START TEST nvmf_delete_subsystem
00:26:52.382 ************************************
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:26:52.382 * Looking for test storage...
00:26:52.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:52.382 19:56:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@452 -- # prepare_net_devs 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # local -g is_hw=no 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # remove_spdk_ns 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # xtrace_disable 00:26:52.382 19:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # pci_devs=() 00:26:54.282 19:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -a pci_devs 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # pci_net_devs=() 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # pci_drivers=() 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -A pci_drivers 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # net_devs=() 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # local -ga net_devs 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # e810=() 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # local -ga e810 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # x722=() 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # local -ga x722 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # mlx=() 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # local -ga mlx 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:26:54.282 19:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:54.282 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:54.282 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:26:54.282 19:56:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # [[ up == up ]] 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:54.282 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.282 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # [[ up == up ]] 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:54.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # is_hw=yes 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:26:54.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:26:54.283 00:26:54.283 --- 10.0.0.2 ping statistics --- 00:26:54.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.283 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:26:54.283 00:26:54.283 --- 10.0.0.1 ping statistics --- 00:26:54.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.283 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # return 0 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@725 -- # xtrace_disable 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@485 -- # nvmfpid=1297714 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@486 -- # waitforlisten 1297714 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # '[' -z 1297714 ']' 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local max_retries=100 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
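What nvmf_tcp_init assembled above is a back-to-back topology on the two e810 ports: the target-side port is isolated in its own network namespace so initiator and target traverse a real TCP path over the physical link. A condensed sketch of the equivalent commands, with the cvl_0_0/cvl_0_1 device names taken from this run:

    ip netns add cvl_0_0_ns_spdk                  # dedicated namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # host -> namespace sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # and back
    # nvmf_tgt is then launched inside the namespace, exactly as the trace above logs:
    # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &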
00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@841 -- # xtrace_disable 00:26:54.283 19:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:54.283 [2024-07-24 19:56:11.377495] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:26:54.283 [2024-07-24 19:56:11.378619] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:26:54.283 [2024-07-24 19:56:11.378674] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.283 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.283 [2024-07-24 19:56:11.446887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:54.283 [2024-07-24 19:56:11.563117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:54.283 [2024-07-24 19:56:11.563179] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:54.283 [2024-07-24 19:56:11.563195] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:54.283 [2024-07-24 19:56:11.563208] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:54.283 [2024-07-24 19:56:11.563219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:54.283 [2024-07-24 19:56:11.563313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:54.283 [2024-07-24 19:56:11.563320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.541 [2024-07-24 19:56:11.660803] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:26:54.541 [2024-07-24 19:56:11.660807] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:26:54.541 [2024-07-24 19:56:11.661132] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
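The waitforlisten step above does not sleep a fixed interval; it polls until the target answers on its UNIX-domain RPC socket (/var/tmp/spdk.sock, max_retries=100 per the trace). A rough sketch of that loop, simplified from the harness's autotest_common.sh; the retry cadence and rpc.py invocation shown here are illustrative, not the exact upstream source:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i != 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1    # target died during startup
            # consider the app up once any RPC round-trips successfully
            scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.5
        done
        return 1                                       # never started listening
    }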
00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@865 -- # return 0 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@731 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 [2024-07-24 19:56:12.340005] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 [2024-07-24 19:56:12.368440] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 NULL1 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:55.107 19:56:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 Delay0 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1297867 00:26:55.107 19:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:26:55.107 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.107 [2024-07-24 19:56:12.434477] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
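rpc_cmd in these traces forwards each call to scripts/rpc.py against the target's /var/tmp/spdk.sock, so the whole configuration above reduces to six RPCs. A standalone equivalent (the RPC shell variable is shorthand introduced here); note that bdev_delay_create's four latency arguments are average and p99 read/write latency in microseconds, i.e. about one second each, which is why the Delay0 namespace serves I/O so slowly in the runs below:

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"    # roughly what the harness's rpc_cmd wraps
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512          # 1000 MB null backing bdev, 512 B blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0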
00:26:57.630 19:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:57.630 19:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable
00:26:57.630 19:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:26:57.631 [repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions with interleaved 'starting I/O failed: -6' markers from the in-flight perf workload elided; the distinct error lines follow]
00:26:57.631 [2024-07-24 19:56:14.596793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff0c800d330 is same with the state(6) to be set
00:26:57.631 [2024-07-24 19:56:14.597524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9553e0 is same with the state(6) to be set
00:26:58.565 [2024-07-24 19:56:15.576220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x956ac0 is same with the state(6) to be set
00:26:58.565 [2024-07-24 19:56:15.598500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x955c20 is same with the state(6) to be set
00:26:58.565 [2024-07-24 19:56:15.598698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9555c0 is same with the state(6) to be set
00:26:58.566 [2024-07-24 19:56:15.599373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff0c800d000 is same with the state(6) to be set
00:26:58.566 [2024-07-24 19:56:15.600020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff0c800d660 is same with the state(6) to be set
00:26:58.566 Initializing NVMe Controllers
00:26:58.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:58.566 Controller IO queue size 128, less than required.
00:26:58.566 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:58.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:26:58.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:26:58.566 Initialization complete. Launching workers.
00:26:58.566 ========================================================
00:26:58.566                                                                                Latency(us)
00:26:58.566 Device Information                                                 :    IOPS   MiB/s    Average        min        max
00:26:58.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  168.24    0.08  899482.68     441.69 1012818.99
00:26:58.566 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  167.24    0.08  901832.19     616.41 1012022.29
00:26:58.566 ========================================================
00:26:58.566 Total                                                              :  335.48    0.16  900653.96     441.69 1012818.99
00:26:58.566
00:26:58.566 19:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:26:58.566 19:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:26:58.566 [2024-07-24 19:56:15.600502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x956ac0 (9): Bad file descriptor
00:26:58.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:58.566 19:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1297867
00:26:58.566 19:56:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1297867
00:26:58.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1297867) - No such process
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1297867
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # local es=0
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # valid_exec_arg wait 1297867
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@639 -- # local arg=wait
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in
00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@643 -- # type -t wait
00:26:58.824 19:56:16
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # wait 1297867 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # es=1 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:58.824 [2024-07-24 19:56:16.120095] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@562 -- # xtrace_disable 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1298387 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:26:58.824 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:26:58.824 19:56:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:58.824 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.824 [2024-07-24 19:56:16.170426] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:59.390 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:59.390 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:26:59.390 19:56:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:26:59.954 19:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:26:59.954 19:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:26:59.954 19:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:00.520 19:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:00.520 19:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:27:00.520 19:56:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:00.777 19:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:00.777 19:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:27:00.777 19:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:01.342 19:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:01.342 19:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:27:01.342 19:56:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:01.921 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:27:01.921 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387 00:27:01.921 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:27:01.921 Initializing NVMe Controllers 00:27:01.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.921 Controller IO queue size 128, less than required. 00:27:01.921 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:01.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:27:01.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:27:01.921 Initialization complete. Launching workers. 
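Both perf rounds follow the pattern already visible in the trace: launch spdk_nvme_perf in the background, in the first round yank the subsystem out from under it with rpc_cmd nvmf_delete_subsystem, then poll the pid on a bounded 0.5 s loop so a hung initiator fails the test instead of wedging it; the summary table for this -t 3 round follows immediately below. A sketch of that loop, using the harness's rpc_cmd/NOT helpers only to make the flow explicit:

    spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    delay=0
    while kill -0 "$perf_pid" 2> /dev/null; do   # perf still running?
        (( delay++ > 20 )) && exit 1             # bounded budget (20-30 polls in the script)
        sleep 0.5
    done
    NOT wait "$perf_pid"                         # first round also asserts wait now fails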
00:27:01.921 ========================================================
00:27:01.921                                                                                Latency(us)
00:27:01.921 Device Information                                                 :    IOPS   MiB/s    Average        min        max
00:27:01.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1003638.50 1000173.20 1013428.67
00:27:01.921 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1005299.26 1000216.25 1011291.80
00:27:01.921 ========================================================
00:27:01.921 Total                                                              :  256.00    0.12 1004468.88 1000173.20 1013428.67
00:27:01.921
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1298387
00:27:02.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1298387) - No such process
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1298387
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # nvmfcleanup
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:02.532 rmmod nvme_tcp
00:27:02.532 rmmod nvme_fabrics
00:27:02.532 rmmod nvme_keyring
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # '[' -n 1297714 ']'
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # killprocess 1297714
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' -z 1297714 ']'
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # kill -0 1297714
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # uname
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1297714 00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1297714' 00:27:02.532 killing process with pid 1297714 00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # kill 1297714 00:27:02.532 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@975 -- # wait 1297714 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@282 -- # remove_spdk_ns 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.791 19:56:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:27:04.691 00:27:04.691 real 0m12.800s 00:27:04.691 user 0m23.923s 00:27:04.691 sys 0m4.372s 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # xtrace_disable 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:27:04.691 ************************************ 00:27:04.691 END TEST nvmf_delete_subsystem 00:27:04.691 ************************************ 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:27:04.691 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:04.949 ************************************ 00:27:04.949 START TEST nvmf_host_management 00:27:04.949 ************************************ 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh 
--transport=tcp --interrupt-mode 00:27:04.949 * Looking for test storage... 00:27:04.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:04.949 19:56:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:04.949 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@452 -- # prepare_net_devs 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@414 -- # local -g is_hw=no 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # remove_spdk_ns 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # xtrace_disable 00:27:04.950 19:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:06.849 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:06.849 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@295 -- # pci_devs=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@295 -- # local -a pci_devs 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # pci_net_devs=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:27:06.850 19:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # pci_drivers=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # local -A pci_drivers 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # net_devs=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # local -ga net_devs 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # e810=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # local -ga e810 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@301 -- # x722=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@301 -- # local -ga x722 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # mlx=() 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # local -ga mlx 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- 
# pci_devs=("${e810[@]}") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:06.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:06.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # [[ up == up ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.850 
19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:06.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # [[ up == up ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:06.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # is_hw=yes 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@247 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:27:06.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:06.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:27:06.850 00:27:06.850 --- 10.0.0.2 ping statistics --- 00:27:06.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.850 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:06.850 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:06.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:06.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:27:06.851 00:27:06.851 --- 10.0.0.1 ping statistics --- 00:27:06.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:06.851 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@426 -- # return 0 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@725 -- # xtrace_disable 00:27:06.851 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@485 -- # nvmfpid=1300714 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@486 -- # waitforlisten 1300714 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@832 -- # '[' -z 1300714 ']' 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local max_retries=100 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
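
The namespace plumbing traced a little earlier (nvmf_tcp_init) is what lets the ip-netns-wrapped nvmf_tgt above listen on 10.0.0.2 while the initiator stays in the root namespace: one E810 port per side, NVMe/TCP over real hardware on a single host. Boiled down, the traced setup is:

    # Target port goes into its own namespace; initiator port stays in root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
    ping -c 1 10.0.0.2                        # verify both directions, as shown above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
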
00:27:07.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@841 -- # xtrace_disable 00:27:07.108 19:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.108 [2024-07-24 19:56:24.275157] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:07.108 [2024-07-24 19:56:24.276205] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:27:07.108 [2024-07-24 19:56:24.276280] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:07.108 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.108 [2024-07-24 19:56:24.342965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:07.108 [2024-07-24 19:56:24.460637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:07.108 [2024-07-24 19:56:24.460692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:07.108 [2024-07-24 19:56:24.460709] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:07.108 [2024-07-24 19:56:24.460723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:07.108 [2024-07-24 19:56:24.460734] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:07.108 [2024-07-24 19:56:24.460835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:07.108 [2024-07-24 19:56:24.460933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:07.108 [2024-07-24 19:56:24.461001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:07.108 [2024-07-24 19:56:24.461004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.366 [2024-07-24 19:56:24.558584] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:07.366 [2024-07-24 19:56:24.558860] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:07.366 [2024-07-24 19:56:24.559129] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:07.366 [2024-07-24 19:56:24.559807] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:07.366 [2024-07-24 19:56:24.560072] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@865 -- # return 0 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@731 -- # xtrace_disable 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.932 [2024-07-24 19:56:25.233832] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@725 -- # xtrace_disable 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:27:07.932 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:27:07.933 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:27:07.933 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:07.933 Malloc0 00:27:07.933 [2024-07-24 19:56:25.293971] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.933 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:27:07.933 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:27:07.933 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@731 -- # xtrace_disable 00:27:07.933 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1300887 00:27:08.191 19:56:25 
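
The rpcs.txt batch cat'd into rpc_cmd above is not echoed in the trace; judging from the values that are visible (Malloc0 backed by a 64 MiB / 512 B malloc bdev, the listener on 10.0.0.2:4420, and the cnode0/host0 pair removed later in the test), it amounts to something like this sketch rather than a verbatim copy of the file:

    # Bring up the target-side stack: backing bdev, subsystem, namespace,
    # listener, and the host that the test later removes mid-I/O.
    rpc_cmd <<- RPC
        bdev_malloc_create $MALLOC_BDEV_SIZE $MALLOC_BLOCK_SIZE -b Malloc0
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
        nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    RPC
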
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1300887 /var/tmp/bdevperf.sock 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@832 -- # '[' -z 1300887 ']' 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local max_retries=100 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # config=() 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:08.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@841 -- # xtrace_disable 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:27:08.191 { 00:27:08.191 "params": { 00:27:08.191 "name": "Nvme$subsystem", 00:27:08.191 "trtype": "$TEST_TRANSPORT", 00:27:08.191 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:08.191 "adrfam": "ipv4", 00:27:08.191 "trsvcid": "$NVMF_PORT", 00:27:08.191 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:08.191 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:08.191 "hdgst": ${hdgst:-false}, 00:27:08.191 "ddgst": ${ddgst:-false} 00:27:08.191 }, 00:27:08.191 "method": "bdev_nvme_attach_controller" 00:27:08.191 } 00:27:08.191 EOF 00:27:08.191 )") 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # cat 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # jq . 
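
Two shell idioms are at work in the bdevperf launch above: gen_nvmf_target_json expands the heredoc template once per subsystem id, and the joined result reaches bdevperf as /dev/fd/63 via process substitution, with no temp file. The invocation pattern, reduced to its essentials ($rootdir stands in for the checkout path shown in the trace):

    # <(...) appears inside the child process as /dev/fd/63, matching the trace.
    "$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10    # 64-deep, 64 KiB verify I/O for 10 s
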
00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=, 00:27:08.191 19:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:27:08.191 "params": { 00:27:08.191 "name": "Nvme0", 00:27:08.191 "trtype": "tcp", 00:27:08.191 "traddr": "10.0.0.2", 00:27:08.191 "adrfam": "ipv4", 00:27:08.191 "trsvcid": "4420", 00:27:08.191 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:08.191 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:08.191 "hdgst": false, 00:27:08.191 "ddgst": false 00:27:08.191 }, 00:27:08.191 "method": "bdev_nvme_attach_controller" 00:27:08.191 }' 00:27:08.191 [2024-07-24 19:56:25.373400] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:27:08.191 [2024-07-24 19:56:25.373477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300887 ] 00:27:08.191 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.191 [2024-07-24 19:56:25.434986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.191 [2024-07-24 19:56:25.544740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.449 Running I/O for 10 seconds... 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@865 -- # return 0 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
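
waitforio, whose trace begins above and completes just below, polls bdevperf's iostat over the RPC socket until the bdev shows read traffic. A condensed sketch of the loop (the retry count, jq filter, and 100-op threshold come from the trace; the pacing sleep is an assumption):

    # Poll until Nvme0n1 has completed at least 100 reads, up to 10 tries.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0                      # the trace shows read_io_count=963 here
                break
            fi
            sleep 0.25                     # pacing is assumed, not in the trace
        done
        return $ret
    }
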
target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=963 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 963 -ge 100 ']' 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable 00:27:09.017 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:09.017 [2024-07-24 19:56:26.375203] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375277] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375294] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375313] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375327] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375339] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375351] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375362] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375374] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375385] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375397] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with 
the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375419] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375431] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375443] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375455] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375466] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375477] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375489] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375501] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375513] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375524] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375543] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375555] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375571] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375584] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375595] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.017 [2024-07-24 19:56:26.375607] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375618] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375630] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375641] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375653] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375665] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375677] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375688] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375701] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375712] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375724] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375735] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375750] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375763] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375774] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375785] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375797] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375808] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375824] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375836] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375848] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375860] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375871] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375882] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375894] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375905] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375916] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375927] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375939] 
tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375950] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375962] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375973] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375985] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.375997] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.376008] tcp.c:1747:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc05290 is same with the state(6) to be set 00:27:09.018 [2024-07-24 19:56:26.376124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:09.018 [2024-07-24 19:56:26.376391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 
19:56:26.376692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.018 [2024-07-24 19:56:26.376813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.018 [2024-07-24 19:56:26.376826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-24 19:56:26.376841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.019 [2024-07-24 19:56:26.376855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-24 19:56:26.376870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.019 [2024-07-24 19:56:26.376883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-24 19:56:26.376898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.019 [2024-07-24 19:56:26.376911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-24 19:56:26.376926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.019 [2024-07-24 19:56:26.376940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-24 19:56:26.376955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.019 [2024-07-24 19:56:26.376968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.019 [2024-07-24 19:56:26.376984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.019 [2024-07-24 19:56:26.377000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.019 [36 further identical command/completion pairs elided: READ sqid:1 cid:28..63 nsid:1 lba:3584..8064 (lba stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:09.020 [2024-07-24 19:56:26.378072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28015a0 is same with the state(6) to be set
00:27:09.020 [2024-07-24 19:56:26.378142] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28015a0 was disconnected and freed. reset controller.
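The (00/08) tuple on each completion is SCT/SC: status code type 0x0 (generic command status) and status code 0x08 (command aborted due to SQ deletion); dnr:0 leaves the do-not-retry bit clear, so these reads can be reissued once the controller reset below completes. When triaging a burst like this, a one-line tally (the log file name here is illustrative) shows how much of the 64-deep queue was flushed:

    grep -c 'ABORTED - SQ DELETION' bdevperf.log   # one match per aborted command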
00:27:09.020 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:27:09.020 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:27:09.020 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@562 -- # xtrace_disable
00:27:09.020 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:27:09.020 [2024-07-24 19:56:26.379320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:27:09.020 task offset: 0 on job bdev=Nvme0n1 fails
00:27:09.020
00:27:09.020 Latency(us)
00:27:09.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:09.020 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:09.020 Job: Nvme0n1 ended in about 0.66 seconds with error
00:27:09.020 Verification LBA range: start 0x0 length 0x400
00:27:09.020 Nvme0n1 : 0.66 1550.56 96.91 96.91 0.00 38067.22 5485.61 33399.09
00:27:09.020 ===================================================================================================================
00:27:09.020 Total : 1550.56 96.91 96.91 0.00 38067.22 5485.61 33399.09
00:27:09.020 [2024-07-24 19:56:26.381413] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:09.020 [2024-07-24 19:56:26.381445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f0790 (9): Bad file descriptor
00:27:09.020 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:27:09.020 19:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:27:09.020 [2024-07-24 19:56:26.391948] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
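A quick consistency check when reading these bdevperf tables: the MiB/s column is just IOPS multiplied by the 64 KiB IO size, so 1550.56 IOPS × 65536 B ≈ 101.6 MB/s = 96.91 MiB/s, matching the value reported above.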
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1300887
00:27:10.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1300887) - No such process
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # config=()
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@536 -- # local subsystem config
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}"
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF
00:27:10.393 {
00:27:10.393 "params": {
00:27:10.393 "name": "Nvme$subsystem",
00:27:10.393 "trtype": "$TEST_TRANSPORT",
00:27:10.393 "traddr": "$NVMF_FIRST_TARGET_IP",
00:27:10.393 "adrfam": "ipv4",
00:27:10.393 "trsvcid": "$NVMF_PORT",
00:27:10.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:27:10.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:27:10.393 "hdgst": ${hdgst:-false},
00:27:10.393 "ddgst": ${ddgst:-false}
00:27:10.393 },
00:27:10.393 "method": "bdev_nvme_attach_controller"
00:27:10.393 }
00:27:10.393 EOF
00:27:10.393 )")
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@558 -- # cat
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # jq .
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@561 -- # IFS=,
00:27:10.393 19:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # printf '%s\n' '{
00:27:10.393 "params": {
00:27:10.393 "name": "Nvme0",
00:27:10.393 "trtype": "tcp",
00:27:10.393 "traddr": "10.0.0.2",
00:27:10.393 "adrfam": "ipv4",
00:27:10.393 "trsvcid": "4420",
00:27:10.393 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:10.393 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:27:10.393 "hdgst": false,
00:27:10.393 "ddgst": false
00:27:10.393 },
00:27:10.393 "method": "bdev_nvme_attach_controller"
00:27:10.393 }'
00:27:10.393 [2024-07-24 19:56:27.434655] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization...
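The heredoc above is the template gen_nvmf_target_json fills in per subsystem before jq assembles the config that bdevperf reads from /dev/fd/62. As a rough sketch of running the same workload standalone, assuming the standard SPDK "subsystems"/"config" JSON wrapper around the fragment printed above (the /tmp path is made up; the parameter values and bdevperf flags are the ones used in this run):

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# queue depth 64, 64 KiB IOs, verify workload, 1 second, as in the traced run
build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1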
00:27:10.393 [2024-07-24 19:56:27.434729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301160 ]
00:27:10.393 EAL: No free 2048 kB hugepages reported on node 1
00:27:10.393 [2024-07-24 19:56:27.493952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:10.394 [2024-07-24 19:56:27.603924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:10.651 Running I/O for 1 seconds...
00:27:11.583
00:27:11.583 Latency(us)
00:27:11.583 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:11.583 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:11.583 Verification LBA range: start 0x0 length 0x400
00:27:11.583 Nvme0n1 : 1.01 2046.84 127.93 0.00 0.00 30599.16 3907.89 32816.55
00:27:11.583 ===================================================================================================================
00:27:11.583 Total : 2046.84 127.93 0.00 0.00 30599.16 3907.89 32816.55
00:27:11.583
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # nvmfcleanup
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:11.842 rmmod nvme_tcp
00:27:11.842 rmmod nvme_fabrics
00:27:11.842 rmmod nvme_keyring
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # '[' -n 1300714 ']'
00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@494 -- # killprocess 1300714
00:27:11.842 19:56:29
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' -z 1300714 ']' 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@955 -- # kill -0 1300714 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # uname 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1300714 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1300714' 00:27:11.842 killing process with pid 1300714 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # kill 1300714 00:27:11.842 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@975 -- # wait 1300714 00:27:12.099 [2024-07-24 19:56:29.460632] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@282 -- # remove_spdk_ns 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.358 19:56:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:14.256 00:27:14.256 real 0m9.449s 00:27:14.256 user 0m18.749s 00:27:14.256 sys 0m3.819s 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # xtrace_disable 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:27:14.256 ************************************ 00:27:14.256 END TEST nvmf_host_management 00:27:14.256 ************************************ 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:14.256 ************************************ 00:27:14.256 START TEST nvmf_lvol 00:27:14.256 ************************************ 00:27:14.256 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:27:14.256 * Looking for test storage... 00:27:14.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:27:14.515 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@452 -- # prepare_net_devs 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@414 -- # local -g is_hw=no 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # remove_spdk_ns 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # xtrace_disable 00:27:14.516 19:56:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@295 -- # pci_devs=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@295 -- # local -a pci_devs 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # pci_net_devs=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # pci_drivers=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # local -A pci_drivers 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # net_devs=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # local -ga net_devs 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # e810=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # local -ga e810 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@301 -- # x722=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@301 -- # local -ga x722 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # mlx=() 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # local -ga mlx 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:27:16.416 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # [[ 
e810 == mlx5 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:16.417 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:16.417 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@394 -- # [[ up == up ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.417 19:56:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:16.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@394 -- # [[ up == up ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:16.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # is_hw=yes 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@249 -- # 
ip -4 addr flush cvl_0_1 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:27:16.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:16.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:27:16.417 00:27:16.417 --- 10.0.0.2 ping statistics --- 00:27:16.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.417 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:16.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:16.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:27:16.417 00:27:16.417 --- 10.0.0.1 ping statistics --- 00:27:16.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:16.417 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@426 -- # return 0 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@725 -- # xtrace_disable 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@485 -- # nvmfpid=1303231 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@486 -- # waitforlisten 1303231 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@832 -- # '[' -z 1303231 ']' 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local max_retries=100 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@841 -- # xtrace_disable 00:27:16.417 19:56:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:16.417 [2024-07-24 19:56:33.751801] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
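Stripped of the xtrace prefixes, the nvmf_tcp_init plumbing traced above reduces to the following sequence: the first e810 port (cvl_0_0) moves into a fresh network namespace as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, and the two pings verify reachability in both directions before the target starts:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator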
00:27:16.417 [2024-07-24 19:56:33.752860] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:27:16.417 [2024-07-24 19:56:33.752924] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.417 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.675 [2024-07-24 19:56:33.820235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:16.675 [2024-07-24 19:56:33.937706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.675 [2024-07-24 19:56:33.937764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.675 [2024-07-24 19:56:33.937780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.675 [2024-07-24 19:56:33.937794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.675 [2024-07-24 19:56:33.937806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.675 [2024-07-24 19:56:33.937885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.675 [2024-07-24 19:56:33.937935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.675 [2024-07-24 19:56:33.937952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.675 [2024-07-24 19:56:34.031524] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:16.675 [2024-07-24 19:56:34.031771] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:16.675 [2024-07-24 19:56:34.047306] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:16.675 [2024-07-24 19:56:34.047566] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
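With the target listening, the nvmf_lvol test body traced below boils down to this RPC sequence (a condensed sketch with paths shortened; <lvs-uuid>, <lvol-uuid>, <snap-uuid> and <clone-uuid> stand for the UUIDs each create call prints, which in this run were a6c9489a-..., 8223dc8f-..., c29582f8-... and d0fb6ecd-...):

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512                  # Malloc0
scripts/rpc.py bdev_malloc_create 64 512                  # Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs         # prints <lvs-uuid>
scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20     # size 20 (LVOL_BDEV_INIT_SIZE)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# while spdk_nvme_perf drives I/O, the volume is snapshotted, grown and cloned
scripts/rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT # prints <snap-uuid>
scripts/rpc.py bdev_lvol_resize <lvol-uuid> 30            # grow to 30 (LVOL_BDEV_FINAL_SIZE)
scripts/rpc.py bdev_lvol_clone <snap-uuid> MY_CLONE       # prints <clone-uuid>
scripts/rpc.py bdev_lvol_inflate <clone-uuid>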
00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@865 -- # return 0 00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@731 -- # xtrace_disable 00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:16.933 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:17.191 [2024-07-24 19:56:34.362679] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.191 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:17.448 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:27:17.448 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:17.706 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:27:17.706 19:56:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:27:17.963 19:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:27:18.220 19:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a6c9489a-dfe2-49ab-926f-b65c9a393954 00:27:18.220 19:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a6c9489a-dfe2-49ab-926f-b65c9a393954 lvol 20 00:27:18.478 19:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8223dc8f-fad2-4c79-a3af-56470739f160 00:27:18.478 19:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:18.736 19:56:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8223dc8f-fad2-4c79-a3af-56470739f160 00:27:18.995 19:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:19.253 [2024-07-24 19:56:36.470836] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:27:19.253 19:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:19.510 19:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1303652 00:27:19.510 19:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:27:19.510 19:56:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:27:19.510 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.480 19:56:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8223dc8f-fad2-4c79-a3af-56470739f160 MY_SNAPSHOT 00:27:20.738 19:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c29582f8-136b-4f94-b061-7232dbfeff8b 00:27:20.738 19:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8223dc8f-fad2-4c79-a3af-56470739f160 30 00:27:20.996 19:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c29582f8-136b-4f94-b061-7232dbfeff8b MY_CLONE 00:27:21.253 19:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=d0fb6ecd-a6e8-49f7-aa12-25f310b2af11 00:27:21.253 19:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate d0fb6ecd-a6e8-49f7-aa12-25f310b2af11 00:27:22.186 19:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1303652 00:27:30.286 Initializing NVMe Controllers 00:27:30.286 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:30.286 Controller IO queue size 128, less than required. 00:27:30.286 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:30.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:27:30.286 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:27:30.286 Initialization complete. Launching workers. 
00:27:30.286 ========================================================
00:27:30.286 Latency(us)
00:27:30.286 Device Information : IOPS MiB/s Average min max
00:27:30.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9911.70 38.72 12921.58 1475.00 57187.58
00:27:30.286 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10654.70 41.62 12015.10 1982.34 53342.63
00:27:30.286 ========================================================
00:27:30.286 Total : 20566.40 80.34 12451.96 1475.00 57187.58
00:27:30.286
00:27:30.286 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:27:30.286 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8223dc8f-fad2-4c79-a3af-56470739f160
00:27:30.286 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a6c9489a-dfe2-49ab-926f-b65c9a393954
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # nvmfcleanup
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:30.544 rmmod nvme_tcp
00:27:30.544 rmmod nvme_fabrics
00:27:30.544 rmmod nvme_keyring
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # '[' -n 1303231 ']'
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@494 -- # killprocess 1303231
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' -z 1303231 ']'
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@955 -- # kill -0 1303231
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # uname
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']'
00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol --
common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1303231 00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1303231' 00:27:30.544 killing process with pid 1303231 00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # kill 1303231 00:27:30.544 19:56:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@975 -- # wait 1303231 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@282 -- # remove_spdk_ns 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:31.109 19:56:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:27:33.007 00:27:33.007 real 0m18.728s 00:27:33.007 user 0m53.968s 00:27:33.007 sys 0m8.537s 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # xtrace_disable 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:27:33.007 ************************************ 00:27:33.007 END TEST nvmf_lvol 00:27:33.007 ************************************ 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:33.007 ************************************ 00:27:33.007 START TEST nvmf_lvs_grow 00:27:33.007 ************************************ 00:27:33.007 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:27:33.266 * Looking for test storage... 
00:27:33.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
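The repeated segments in the PATH values above come from paths/export.sh prepending its toolchain directories unconditionally each time a test script re-sources it. An idempotent prepend would avoid the growth; the sketch below is illustrative only, not the harness's actual helper:

    # prepend a directory to PATH only if it is not already present
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, leave it alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH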
00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@452 -- # prepare_net_devs 00:27:33.266 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@414 -- # local -g is_hw=no 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # remove_spdk_ns 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # xtrace_disable 00:27:33.267 19:56:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@295 -- # pci_devs=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -a pci_devs 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # pci_net_devs=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # pci_drivers=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -A pci_drivers 00:27:35.167 19:56:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # net_devs=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # local -ga net_devs 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # e810=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # local -ga e810 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@301 -- # x722=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@301 -- # local -ga x722 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # mlx=() 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # local -ga mlx 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.167 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:27:35.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:35.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@394 -- # [[ up == up ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:35.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@387 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@394 -- # [[ up == up ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:35.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # is_hw=yes 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.168 
19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:27:35.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:27:35.168 00:27:35.168 --- 10.0.0.2 ping statistics --- 00:27:35.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.168 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:27:35.168 00:27:35.168 --- 10.0.0.1 ping statistics --- 00:27:35.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.168 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@426 -- # return 0 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@725 -- # xtrace_disable 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:35.168 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@485 -- # nvmfpid=1306900 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@486 -- # waitforlisten 1306900 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # '[' -z 1306900 ']' 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local max_retries=100 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@841 -- # xtrace_disable 00:27:35.169 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:35.169 [2024-07-24 19:56:52.487976] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:35.169 [2024-07-24 19:56:52.489067] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:27:35.169 [2024-07-24 19:56:52.489136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.169 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.427 [2024-07-24 19:56:52.557819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.427 [2024-07-24 19:56:52.677298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.427 [2024-07-24 19:56:52.677349] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.427 [2024-07-24 19:56:52.677378] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.427 [2024-07-24 19:56:52.677390] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.427 [2024-07-24 19:56:52.677400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.427 [2024-07-24 19:56:52.677429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.427 [2024-07-24 19:56:52.777571] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:35.427 [2024-07-24 19:56:52.777933] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
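The nvmfappstart sequence above amounts to launching nvmf_tgt inside the target network namespace with interrupt mode enabled, then blocking until its RPC socket answers. A hedged sketch follows: the paths and flags are the ones from this trace, while the polling loop is an illustrative stand-in for the harness's waitforlisten helper:

    # start the interrupt-mode target in the test namespace (flags as traced above)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
    nvmfpid=$!
    # the RPC endpoint is a UNIX socket, so rpc.py can poll it from the default namespace
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done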
00:27:35.427 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:27:35.427 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@865 -- # return 0 00:27:35.427 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:27:35.427 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@731 -- # xtrace_disable 00:27:35.427 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:35.685 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.685 19:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:35.943 [2024-07-24 19:56:53.098025] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.943 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:27:35.943 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:27:35.943 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1108 -- # xtrace_disable 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:35.944 ************************************ 00:27:35.944 START TEST lvs_grow_clean 00:27:35.944 ************************************ 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # lvs_grow 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:35.944 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:36.201 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:36.201 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:36.459 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:36.459 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:36.459 19:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:36.717 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:36.717 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:36.717 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 lvol 150 00:27:36.975 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a529f0d7-ee72-4205-8e52-bb9e9343a9bb 00:27:36.975 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:36.975 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:37.233 [2024-07-24 19:56:54.521942] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:37.233 [2024-07-24 19:56:54.522037] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:37.233 true 00:27:37.233 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:37.233 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:37.490 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:37.490 19:56:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:37.748 19:56:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a529f0d7-ee72-4205-8e52-bb9e9343a9bb 00:27:38.006 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:38.265 [2024-07-24 19:56:55.506222] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.265 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1307333 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1307333 /var/tmp/bdevperf.sock 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # '[' -z 1307333 ']' 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local max_retries=100 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:38.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@841 -- # xtrace_disable 00:27:38.524 19:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.524 [2024-07-24 19:56:55.821085] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:27:38.524 [2024-07-24 19:56:55.821184] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307333 ] 00:27:38.524 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.524 [2024-07-24 19:56:55.880751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.783 [2024-07-24 19:56:55.995178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:38.783 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:27:38.783 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@865 -- # return 0 00:27:38.783 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:39.349 Nvme0n1 00:27:39.349 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:39.607 [ 00:27:39.607 { 00:27:39.607 "name": "Nvme0n1", 00:27:39.607 "aliases": [ 00:27:39.607 "a529f0d7-ee72-4205-8e52-bb9e9343a9bb" 00:27:39.607 ], 00:27:39.607 "product_name": "NVMe disk", 00:27:39.607 "block_size": 4096, 00:27:39.607 "num_blocks": 38912, 00:27:39.607 "uuid": "a529f0d7-ee72-4205-8e52-bb9e9343a9bb", 00:27:39.607 "assigned_rate_limits": { 00:27:39.607 "rw_ios_per_sec": 0, 00:27:39.607 "rw_mbytes_per_sec": 0, 00:27:39.607 "r_mbytes_per_sec": 0, 00:27:39.607 "w_mbytes_per_sec": 0 00:27:39.607 }, 00:27:39.607 "claimed": false, 00:27:39.607 "zoned": false, 00:27:39.607 "supported_io_types": { 00:27:39.607 "read": true, 00:27:39.607 "write": true, 00:27:39.607 "unmap": true, 00:27:39.607 "flush": true, 00:27:39.607 "reset": true, 00:27:39.607 "nvme_admin": true, 00:27:39.607 "nvme_io": true, 00:27:39.607 "nvme_io_md": false, 00:27:39.607 "write_zeroes": true, 00:27:39.607 "zcopy": false, 00:27:39.607 "get_zone_info": false, 00:27:39.607 "zone_management": false, 00:27:39.607 "zone_append": false, 00:27:39.607 "compare": true, 00:27:39.607 "compare_and_write": true, 00:27:39.607 "abort": true, 00:27:39.607 "seek_hole": false, 00:27:39.607 "seek_data": false, 00:27:39.607 "copy": true, 00:27:39.607 "nvme_iov_md": false 00:27:39.607 }, 00:27:39.607 "memory_domains": [ 00:27:39.607 { 00:27:39.607 "dma_device_id": "system", 00:27:39.607 "dma_device_type": 1 00:27:39.607 } 00:27:39.607 ], 00:27:39.607 "driver_specific": { 00:27:39.607 "nvme": [ 00:27:39.607 { 00:27:39.607 "trid": { 00:27:39.607 "trtype": "TCP", 00:27:39.607 "adrfam": "IPv4", 00:27:39.607 "traddr": "10.0.0.2", 00:27:39.607 "trsvcid": "4420", 00:27:39.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:39.607 }, 00:27:39.607 "ctrlr_data": { 00:27:39.607 "cntlid": 1, 00:27:39.607 "vendor_id": "0x8086", 00:27:39.607 "model_number": "SPDK bdev Controller", 00:27:39.607 "serial_number": "SPDK0", 00:27:39.607 "firmware_revision": "24.09", 00:27:39.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.607 "oacs": { 00:27:39.607 "security": 0, 00:27:39.607 "format": 0, 00:27:39.607 "firmware": 0, 
00:27:39.607 "ns_manage": 0 00:27:39.607 }, 00:27:39.607 "multi_ctrlr": true, 00:27:39.607 "ana_reporting": false 00:27:39.607 }, 00:27:39.607 "vs": { 00:27:39.607 "nvme_version": "1.3" 00:27:39.607 }, 00:27:39.607 "ns_data": { 00:27:39.607 "id": 1, 00:27:39.607 "can_share": true 00:27:39.607 } 00:27:39.607 } 00:27:39.607 ], 00:27:39.607 "mp_policy": "active_passive" 00:27:39.607 } 00:27:39.607 } 00:27:39.607 ] 00:27:39.607 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1307466 00:27:39.607 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:39.607 19:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:39.607 Running I/O for 10 seconds... 00:27:40.981 Latency(us) 00:27:40.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.981 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:40.981 Nvme0n1 : 1.00 13970.00 54.57 0.00 0.00 0.00 0.00 0.00 00:27:40.981 =================================================================================================================== 00:27:40.981 Total : 13970.00 54.57 0.00 0.00 0.00 0.00 0.00 00:27:40.981 00:27:41.547 19:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:41.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:41.806 Nvme0n1 : 2.00 14100.00 55.08 0.00 0.00 0.00 0.00 0.00 00:27:41.806 =================================================================================================================== 00:27:41.806 Total : 14100.00 55.08 0.00 0.00 0.00 0.00 0.00 00:27:41.806 00:27:41.806 true 00:27:41.806 19:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:41.806 19:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:42.064 19:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:42.064 19:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:42.064 19:56:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1307466 00:27:42.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:42.655 Nvme0n1 : 3.00 14249.67 55.66 0.00 0.00 0.00 0.00 0.00 00:27:42.655 =================================================================================================================== 00:27:42.655 Total : 14249.67 55.66 0.00 0.00 0.00 0.00 0.00 00:27:42.655 00:27:43.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:43.594 Nvme0n1 : 4.00 14306.75 55.89 0.00 0.00 0.00 0.00 0.00 00:27:43.594 
=================================================================================================================== 00:27:43.594 Total : 14306.75 55.89 0.00 0.00 0.00 0.00 0.00 00:27:43.594 00:27:44.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:44.971 Nvme0n1 : 5.00 14341.00 56.02 0.00 0.00 0.00 0.00 0.00 00:27:44.971 =================================================================================================================== 00:27:44.971 Total : 14341.00 56.02 0.00 0.00 0.00 0.00 0.00 00:27:44.971 00:27:45.905 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:45.905 Nvme0n1 : 6.00 14385.00 56.19 0.00 0.00 0.00 0.00 0.00 00:27:45.905 =================================================================================================================== 00:27:45.905 Total : 14385.00 56.19 0.00 0.00 0.00 0.00 0.00 00:27:45.905 00:27:46.840 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:46.840 Nvme0n1 : 7.00 14416.43 56.31 0.00 0.00 0.00 0.00 0.00 00:27:46.840 =================================================================================================================== 00:27:46.840 Total : 14416.43 56.31 0.00 0.00 0.00 0.00 0.00 00:27:46.840 00:27:47.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:47.774 Nvme0n1 : 8.00 14441.12 56.41 0.00 0.00 0.00 0.00 0.00 00:27:47.774 =================================================================================================================== 00:27:47.774 Total : 14441.12 56.41 0.00 0.00 0.00 0.00 0.00 00:27:47.774 00:27:48.708 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:48.708 Nvme0n1 : 9.00 14459.33 56.48 0.00 0.00 0.00 0.00 0.00 00:27:48.708 =================================================================================================================== 00:27:48.708 Total : 14459.33 56.48 0.00 0.00 0.00 0.00 0.00 00:27:48.708 00:27:49.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:49.642 Nvme0n1 : 10.00 14480.30 56.56 0.00 0.00 0.00 0.00 0.00 00:27:49.642 =================================================================================================================== 00:27:49.642 Total : 14480.30 56.56 0.00 0.00 0.00 0.00 0.00 00:27:49.642 00:27:49.642 00:27:49.642 Latency(us) 00:27:49.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.642 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:49.642 Nvme0n1 : 10.00 14482.21 56.57 0.00 0.00 8832.89 5315.70 19223.89 00:27:49.642 =================================================================================================================== 00:27:49.642 Total : 14482.21 56.57 0.00 0.00 8832.89 5315.70 19223.89 00:27:49.642 0 00:27:49.642 19:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1307333 00:27:49.642 19:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' -z 1307333 ']' 00:27:49.642 19:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # kill -0 1307333 00:27:49.642 19:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # uname 00:27:49.642 19:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:27:49.642 19:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1307333 00:27:49.899 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:27:49.899 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:27:49.899 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1307333' 00:27:49.899 killing process with pid 1307333 00:27:49.899 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # kill 1307333 00:27:49.899 Received shutdown signal, test time was about 10.000000 seconds 00:27:49.899 00:27:49.899 Latency(us) 00:27:49.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:49.899 =================================================================================================================== 00:27:49.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:49.899 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@975 -- # wait 1307333 00:27:50.157 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:50.415 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.672 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:50.672 19:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:27:50.930 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:27:50.930 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:27:50.930 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:51.189 [2024-07-24 19:57:08.321946] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # local es=0 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:27:51.189 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:51.445 request: 00:27:51.445 { 00:27:51.445 "uuid": "c3090ad5-e508-4543-93c7-fcdd200b0cd0", 00:27:51.445 "method": "bdev_lvol_get_lvstores", 00:27:51.445 "req_id": 1 00:27:51.445 } 00:27:51.445 Got JSON-RPC error response 00:27:51.445 response: 00:27:51.445 { 00:27:51.445 "code": -19, 00:27:51.445 "message": "No such device" 00:27:51.445 } 00:27:51.445 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # es=1 00:27:51.445 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:27:51.445 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:27:51.446 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:27:51.446 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:51.703 aio_bdev 00:27:51.703 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a529f0d7-ee72-4205-8e52-bb9e9343a9bb 00:27:51.703 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_name=a529f0d7-ee72-4205-8e52-bb9e9343a9bb 00:27:51.703 19:57:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:27:51.703 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local i 00:27:51.703 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:27:51.703 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:27:51.703 19:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:27:51.961 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a529f0d7-ee72-4205-8e52-bb9e9343a9bb -t 2000 00:27:52.219 [ 00:27:52.219 { 00:27:52.219 "name": "a529f0d7-ee72-4205-8e52-bb9e9343a9bb", 00:27:52.219 "aliases": [ 00:27:52.219 "lvs/lvol" 00:27:52.219 ], 00:27:52.219 "product_name": "Logical Volume", 00:27:52.219 "block_size": 4096, 00:27:52.219 "num_blocks": 38912, 00:27:52.219 "uuid": "a529f0d7-ee72-4205-8e52-bb9e9343a9bb", 00:27:52.219 "assigned_rate_limits": { 00:27:52.219 "rw_ios_per_sec": 0, 00:27:52.219 "rw_mbytes_per_sec": 0, 00:27:52.219 "r_mbytes_per_sec": 0, 00:27:52.219 "w_mbytes_per_sec": 0 00:27:52.219 }, 00:27:52.219 "claimed": false, 00:27:52.219 "zoned": false, 00:27:52.219 "supported_io_types": { 00:27:52.219 "read": true, 00:27:52.219 "write": true, 00:27:52.219 "unmap": true, 00:27:52.219 "flush": false, 00:27:52.219 "reset": true, 00:27:52.219 "nvme_admin": false, 00:27:52.219 "nvme_io": false, 00:27:52.219 "nvme_io_md": false, 00:27:52.219 "write_zeroes": true, 00:27:52.219 "zcopy": false, 00:27:52.219 "get_zone_info": false, 00:27:52.219 "zone_management": false, 00:27:52.219 "zone_append": false, 00:27:52.219 "compare": false, 00:27:52.219 "compare_and_write": false, 00:27:52.219 "abort": false, 00:27:52.219 "seek_hole": true, 00:27:52.219 "seek_data": true, 00:27:52.220 "copy": false, 00:27:52.220 "nvme_iov_md": false 00:27:52.220 }, 00:27:52.220 "driver_specific": { 00:27:52.220 "lvol": { 00:27:52.220 "lvol_store_uuid": "c3090ad5-e508-4543-93c7-fcdd200b0cd0", 00:27:52.220 "base_bdev": "aio_bdev", 00:27:52.220 "thin_provision": false, 00:27:52.220 "num_allocated_clusters": 38, 00:27:52.220 "snapshot": false, 00:27:52.220 "clone": false, 00:27:52.220 "esnap_clone": false 00:27:52.220 } 00:27:52.220 } 00:27:52.220 } 00:27:52.220 ] 00:27:52.220 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # return 0 00:27:52.220 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:52.220 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:27:52.478 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:27:52.478 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:52.478 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:27:52.737 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:27:52.737 19:57:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a529f0d7-ee72-4205-8e52-bb9e9343a9bb 00:27:52.737 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c3090ad5-e508-4543-93c7-fcdd200b0cd0 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:53.306 00:27:53.306 real 0m17.508s 00:27:53.306 user 0m16.886s 00:27:53.306 sys 0m1.924s 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # xtrace_disable 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:27:53.306 ************************************ 00:27:53.306 END TEST lvs_grow_clean 00:27:53.306 ************************************ 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1108 -- # xtrace_disable 00:27:53.306 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:27:53.566 ************************************ 00:27:53.566 START TEST lvs_grow_dirty 00:27:53.566 ************************************ 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # lvs_grow dirty 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:27:53.566 19:57:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:53.566 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:27:53.825 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:27:53.825 19:57:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:27:54.084 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a58e0571-db97-4219-9f8f-41787cd9e54e 00:27:54.084 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:27:54.084 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:27:54.343 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:27:54.343 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:27:54.343 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a58e0571-db97-4219-9f8f-41787cd9e54e lvol 150 00:27:54.602 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:27:54.602 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:27:54.602 19:57:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:27:54.863 [2024-07-24 19:57:11.985898] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:27:54.863 [2024-07-24 19:57:11.986013] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:27:54.863 true 00:27:54.863 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:27:54.863 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:27:55.123 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:27:55.123 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:55.123 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:27:55.383 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.643 [2024-07-24 19:57:12.962159] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.643 19:57:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1309989 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1309989 /var/tmp/bdevperf.sock 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # '[' -z 1309989 ']' 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local max_retries=100 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
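[editor's note] The trace above builds the whole lvs_grow fixture out of a handful of RPCs. A minimal sketch of the same sequence, not part of the captured log: it assumes a running nvmf_tgt, uses `rpc` as shorthand for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py, and /tmp/aio_file is a hypothetical stand-in for the test's aio_bdev backing path; all RPC names and flags are taken verbatim from the trace.

# 1) back an lvstore with a 200M file-based AIO bdev
truncate -s 200M /tmp/aio_file                 # hypothetical path; the test uses .../test/nvmf/target/aio_bdev
rpc bdev_aio_create /tmp/aio_file aio_bdev 4096
lvs=$(rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 150)
# 2) export the lvol over NVMe/TCP
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# 3) grow the backing file, rescan the AIO bdev, then grow the lvstore
#    (in the trace, bdev_lvol_grow_lvstore runs while bdevperf is writing)
truncate -s 400M /tmp/aio_file
rpc bdev_aio_rescan aio_bdev
rpc bdev_lvol_grow_lvstore -u "$lvs"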
00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # xtrace_disable 00:27:55.901 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:27:55.901 [2024-07-24 19:57:13.264180] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:27:55.901 [2024-07-24 19:57:13.264280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309989 ] 00:27:56.158 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.158 [2024-07-24 19:57:13.323494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.158 [2024-07-24 19:57:13.432090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:56.158 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:27:56.158 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@865 -- # return 0 00:27:56.158 19:57:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:27:56.721 Nvme0n1 00:27:56.721 19:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:27:56.980 [ 00:27:56.980 { 00:27:56.980 "name": "Nvme0n1", 00:27:56.980 "aliases": [ 00:27:56.980 "4f53b2cc-7304-4ea7-814e-066d50ed51f0" 00:27:56.980 ], 00:27:56.980 "product_name": "NVMe disk", 00:27:56.980 "block_size": 4096, 00:27:56.980 "num_blocks": 38912, 00:27:56.980 "uuid": "4f53b2cc-7304-4ea7-814e-066d50ed51f0", 00:27:56.980 "assigned_rate_limits": { 00:27:56.980 "rw_ios_per_sec": 0, 00:27:56.980 "rw_mbytes_per_sec": 0, 00:27:56.980 "r_mbytes_per_sec": 0, 00:27:56.980 "w_mbytes_per_sec": 0 00:27:56.980 }, 00:27:56.980 "claimed": false, 00:27:56.980 "zoned": false, 00:27:56.980 "supported_io_types": { 00:27:56.980 "read": true, 00:27:56.980 "write": true, 00:27:56.980 "unmap": true, 00:27:56.980 "flush": true, 00:27:56.980 "reset": true, 00:27:56.980 "nvme_admin": true, 00:27:56.980 "nvme_io": true, 00:27:56.980 "nvme_io_md": false, 00:27:56.980 "write_zeroes": true, 00:27:56.980 "zcopy": false, 00:27:56.980 "get_zone_info": false, 00:27:56.980 "zone_management": false, 00:27:56.980 "zone_append": false, 00:27:56.980 "compare": true, 00:27:56.980 "compare_and_write": true, 00:27:56.980 "abort": true, 00:27:56.980 "seek_hole": false, 00:27:56.980 "seek_data": false, 00:27:56.980 "copy": true, 00:27:56.980 "nvme_iov_md": false 00:27:56.980 }, 00:27:56.980 "memory_domains": [ 00:27:56.980 { 00:27:56.980 "dma_device_id": "system", 00:27:56.980 "dma_device_type": 1 00:27:56.980 } 00:27:56.980 ], 00:27:56.980 "driver_specific": { 00:27:56.980 "nvme": [ 00:27:56.980 { 00:27:56.980 "trid": { 00:27:56.980 "trtype": "TCP", 00:27:56.980 "adrfam": "IPv4", 00:27:56.980 "traddr": "10.0.0.2", 00:27:56.980 "trsvcid": "4420", 00:27:56.980 "subnqn": "nqn.2016-06.io.spdk:cnode0" 
00:27:56.980 }, 00:27:56.980 "ctrlr_data": { 00:27:56.980 "cntlid": 1, 00:27:56.980 "vendor_id": "0x8086", 00:27:56.980 "model_number": "SPDK bdev Controller", 00:27:56.980 "serial_number": "SPDK0", 00:27:56.980 "firmware_revision": "24.09", 00:27:56.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.980 "oacs": { 00:27:56.980 "security": 0, 00:27:56.980 "format": 0, 00:27:56.980 "firmware": 0, 00:27:56.980 "ns_manage": 0 00:27:56.980 }, 00:27:56.980 "multi_ctrlr": true, 00:27:56.980 "ana_reporting": false 00:27:56.980 }, 00:27:56.980 "vs": { 00:27:56.980 "nvme_version": "1.3" 00:27:56.980 }, 00:27:56.980 "ns_data": { 00:27:56.980 "id": 1, 00:27:56.980 "can_share": true 00:27:56.980 } 00:27:56.980 } 00:27:56.980 ], 00:27:56.980 "mp_policy": "active_passive" 00:27:56.980 } 00:27:56.980 } 00:27:56.980 ] 00:27:56.980 19:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1310117 00:27:56.980 19:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:27:56.980 19:57:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:56.980 Running I/O for 10 seconds... 00:27:58.356 Latency(us) 00:27:58.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:58.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:58.356 Nvme0n1 : 1.00 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:27:58.356 =================================================================================================================== 00:27:58.356 Total : 14097.00 55.07 0.00 0.00 0.00 0.00 0.00 00:27:58.356 00:27:58.949 19:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:27:59.206 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:59.206 Nvme0n1 : 2.00 14224.00 55.56 0.00 0.00 0.00 0.00 0.00 00:27:59.206 =================================================================================================================== 00:27:59.206 Total : 14224.00 55.56 0.00 0.00 0.00 0.00 0.00 00:27:59.206 00:27:59.206 true 00:27:59.206 19:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:27:59.206 19:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:27:59.465 19:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:27:59.465 19:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:27:59.465 19:57:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1310117 00:28:00.034 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:00.034 Nvme0n1 : 3.00 14266.33 55.73 0.00 0.00 0.00 0.00 0.00 00:28:00.034 
=================================================================================================================== 00:28:00.034 Total : 14266.33 55.73 0.00 0.00 0.00 0.00 0.00 00:28:00.034 00:28:01.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:01.413 Nvme0n1 : 4.00 14291.75 55.83 0.00 0.00 0.00 0.00 0.00 00:28:01.413 =================================================================================================================== 00:28:01.413 Total : 14291.75 55.83 0.00 0.00 0.00 0.00 0.00 00:28:01.413 00:28:02.349 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:02.349 Nvme0n1 : 5.00 14354.40 56.07 0.00 0.00 0.00 0.00 0.00 00:28:02.349 =================================================================================================================== 00:28:02.349 Total : 14354.40 56.07 0.00 0.00 0.00 0.00 0.00 00:28:02.349 00:28:03.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:03.284 Nvme0n1 : 6.00 14311.50 55.90 0.00 0.00 0.00 0.00 0.00 00:28:03.284 =================================================================================================================== 00:28:03.284 Total : 14311.50 55.90 0.00 0.00 0.00 0.00 0.00 00:28:03.284 00:28:04.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:04.220 Nvme0n1 : 7.00 14353.43 56.07 0.00 0.00 0.00 0.00 0.00 00:28:04.220 =================================================================================================================== 00:28:04.220 Total : 14353.43 56.07 0.00 0.00 0.00 0.00 0.00 00:28:04.220 00:28:05.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:05.155 Nvme0n1 : 8.00 14409.25 56.29 0.00 0.00 0.00 0.00 0.00 00:28:05.155 =================================================================================================================== 00:28:05.155 Total : 14409.25 56.29 0.00 0.00 0.00 0.00 0.00 00:28:05.155 00:28:06.097 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:06.097 Nvme0n1 : 9.00 14445.11 56.43 0.00 0.00 0.00 0.00 0.00 00:28:06.097 =================================================================================================================== 00:28:06.097 Total : 14445.11 56.43 0.00 0.00 0.00 0.00 0.00 00:28:06.097 00:28:07.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.032 Nvme0n1 : 10.00 14461.10 56.49 0.00 0.00 0.00 0.00 0.00 00:28:07.032 =================================================================================================================== 00:28:07.032 Total : 14461.10 56.49 0.00 0.00 0.00 0.00 0.00 00:28:07.032 00:28:07.032 00:28:07.032 Latency(us) 00:28:07.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:07.032 Nvme0n1 : 10.01 14461.83 56.49 0.00 0.00 8845.63 4660.34 21068.61 00:28:07.032 =================================================================================================================== 00:28:07.032 Total : 14461.83 56.49 0.00 0.00 8845.63 4660.34 21068.61 00:28:07.032 0 00:28:07.032 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1309989 00:28:07.032 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' -z 1309989 ']' 00:28:07.032 19:57:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # kill -0 1309989 00:28:07.032 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # uname 00:28:07.032 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:28:07.032 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1309989 00:28:07.291 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:28:07.291 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:28:07.291 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1309989' 00:28:07.291 killing process with pid 1309989 00:28:07.291 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # kill 1309989 00:28:07.291 Received shutdown signal, test time was about 10.000000 seconds 00:28:07.291 00:28:07.291 Latency(us) 00:28:07.291 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:07.291 =================================================================================================================== 00:28:07.291 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:07.291 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@975 -- # wait 1309989 00:28:07.550 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:07.807 19:57:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:08.065 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:08.065 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1306900 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1306900 00:28:08.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1306900 Killed "${NVMF_APP[@]}" "$@" 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:28:08.323 
19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@725 -- # xtrace_disable 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@485 -- # nvmfpid=1311436 00:28:08.323 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@486 -- # waitforlisten 1311436 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # '[' -z 1311436 ']' 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local max_retries=100 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@841 -- # xtrace_disable 00:28:08.324 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:08.324 [2024-07-24 19:57:25.579128] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:08.324 [2024-07-24 19:57:25.580425] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:08.324 [2024-07-24 19:57:25.580503] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.324 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.324 [2024-07-24 19:57:25.655959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.582 [2024-07-24 19:57:25.776555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:08.582 [2024-07-24 19:57:25.776612] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.582 [2024-07-24 19:57:25.776637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:08.582 [2024-07-24 19:57:25.776651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:08.582 [2024-07-24 19:57:25.776663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
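[editor's note] After the restart in interrupt mode, the recovery checks that follow are plain jq pipelines over bdev_lvol_get_lvstores. A minimal sketch of the same verification, not part of the captured log, reusing the `rpc` shorthand from the sketch above and the lvstore UUID from this trace:

free=$(rpc bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e | jq -r '.[0].free_clusters')
total=$(rpc bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e | jq -r '.[0].total_data_clusters')
# the dirty lvstore is recovered on load with its grown geometry:
# 99 data clusters total (the 400M size) and 61 of them free
(( total == 99 && free == 61 )) || echo "lvstore recovery check failed" >&2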
00:28:08.582 [2024-07-24 19:57:25.776698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:08.582 [2024-07-24 19:57:25.869094] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:08.582 [2024-07-24 19:57:25.869441] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:08.582 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:28:08.582 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@865 -- # return 0 00:28:08.582 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:28:08.583 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@731 -- # xtrace_disable 00:28:08.583 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:08.583 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:08.583 19:57:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:08.841 [2024-07-24 19:57:26.155594] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:08.841 [2024-07-24 19:57:26.155725] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:08.841 [2024-07-24 19:57:26.155774] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_name=4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local i 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:28:08.841 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:09.100 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f53b2cc-7304-4ea7-814e-066d50ed51f0 -t 2000 00:28:09.358 [ 
00:28:09.358 { 00:28:09.358 "name": "4f53b2cc-7304-4ea7-814e-066d50ed51f0", 00:28:09.358 "aliases": [ 00:28:09.358 "lvs/lvol" 00:28:09.358 ], 00:28:09.358 "product_name": "Logical Volume", 00:28:09.358 "block_size": 4096, 00:28:09.358 "num_blocks": 38912, 00:28:09.358 "uuid": "4f53b2cc-7304-4ea7-814e-066d50ed51f0", 00:28:09.358 "assigned_rate_limits": { 00:28:09.358 "rw_ios_per_sec": 0, 00:28:09.358 "rw_mbytes_per_sec": 0, 00:28:09.358 "r_mbytes_per_sec": 0, 00:28:09.358 "w_mbytes_per_sec": 0 00:28:09.358 }, 00:28:09.358 "claimed": false, 00:28:09.358 "zoned": false, 00:28:09.358 "supported_io_types": { 00:28:09.358 "read": true, 00:28:09.358 "write": true, 00:28:09.358 "unmap": true, 00:28:09.358 "flush": false, 00:28:09.358 "reset": true, 00:28:09.358 "nvme_admin": false, 00:28:09.358 "nvme_io": false, 00:28:09.358 "nvme_io_md": false, 00:28:09.358 "write_zeroes": true, 00:28:09.358 "zcopy": false, 00:28:09.358 "get_zone_info": false, 00:28:09.358 "zone_management": false, 00:28:09.358 "zone_append": false, 00:28:09.358 "compare": false, 00:28:09.358 "compare_and_write": false, 00:28:09.358 "abort": false, 00:28:09.358 "seek_hole": true, 00:28:09.358 "seek_data": true, 00:28:09.358 "copy": false, 00:28:09.358 "nvme_iov_md": false 00:28:09.358 }, 00:28:09.358 "driver_specific": { 00:28:09.358 "lvol": { 00:28:09.358 "lvol_store_uuid": "a58e0571-db97-4219-9f8f-41787cd9e54e", 00:28:09.358 "base_bdev": "aio_bdev", 00:28:09.358 "thin_provision": false, 00:28:09.358 "num_allocated_clusters": 38, 00:28:09.358 "snapshot": false, 00:28:09.358 "clone": false, 00:28:09.358 "esnap_clone": false 00:28:09.358 } 00:28:09.358 } 00:28:09.358 } 00:28:09.358 ] 00:28:09.358 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # return 0 00:28:09.358 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:09.358 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:28:09.618 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:28:09.618 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:09.618 19:57:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:28:09.878 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:28:09.878 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:10.138 [2024-07-24 19:57:27.437228] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # local es=0 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@639 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:28:10.138 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:10.396 request: 00:28:10.396 { 00:28:10.396 "uuid": "a58e0571-db97-4219-9f8f-41787cd9e54e", 00:28:10.396 "method": "bdev_lvol_get_lvstores", 00:28:10.396 "req_id": 1 00:28:10.396 } 00:28:10.396 Got JSON-RPC error response 00:28:10.396 response: 00:28:10.396 { 00:28:10.396 "code": -19, 00:28:10.396 "message": "No such device" 00:28:10.396 } 00:28:10.396 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # es=1 00:28:10.396 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:28:10.396 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:28:10.396 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:28:10.396 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:28:10.655 aio_bdev 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 
-- # waitforbdev 4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_name=4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_timeout= 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local i 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # [[ -z '' ]] 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # bdev_timeout=2000 00:28:10.655 19:57:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:28:10.913 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4f53b2cc-7304-4ea7-814e-066d50ed51f0 -t 2000 00:28:11.171 [ 00:28:11.171 { 00:28:11.171 "name": "4f53b2cc-7304-4ea7-814e-066d50ed51f0", 00:28:11.171 "aliases": [ 00:28:11.171 "lvs/lvol" 00:28:11.171 ], 00:28:11.171 "product_name": "Logical Volume", 00:28:11.171 "block_size": 4096, 00:28:11.171 "num_blocks": 38912, 00:28:11.171 "uuid": "4f53b2cc-7304-4ea7-814e-066d50ed51f0", 00:28:11.171 "assigned_rate_limits": { 00:28:11.171 "rw_ios_per_sec": 0, 00:28:11.171 "rw_mbytes_per_sec": 0, 00:28:11.171 "r_mbytes_per_sec": 0, 00:28:11.171 "w_mbytes_per_sec": 0 00:28:11.171 }, 00:28:11.171 "claimed": false, 00:28:11.171 "zoned": false, 00:28:11.171 "supported_io_types": { 00:28:11.171 "read": true, 00:28:11.171 "write": true, 00:28:11.171 "unmap": true, 00:28:11.171 "flush": false, 00:28:11.171 "reset": true, 00:28:11.171 "nvme_admin": false, 00:28:11.171 "nvme_io": false, 00:28:11.171 "nvme_io_md": false, 00:28:11.171 "write_zeroes": true, 00:28:11.171 "zcopy": false, 00:28:11.171 "get_zone_info": false, 00:28:11.171 "zone_management": false, 00:28:11.171 "zone_append": false, 00:28:11.171 "compare": false, 00:28:11.172 "compare_and_write": false, 00:28:11.172 "abort": false, 00:28:11.172 "seek_hole": true, 00:28:11.172 "seek_data": true, 00:28:11.172 "copy": false, 00:28:11.172 "nvme_iov_md": false 00:28:11.172 }, 00:28:11.172 "driver_specific": { 00:28:11.172 "lvol": { 00:28:11.172 "lvol_store_uuid": "a58e0571-db97-4219-9f8f-41787cd9e54e", 00:28:11.172 "base_bdev": "aio_bdev", 00:28:11.172 "thin_provision": false, 00:28:11.172 "num_allocated_clusters": 38, 00:28:11.172 "snapshot": false, 00:28:11.172 "clone": false, 00:28:11.172 "esnap_clone": false 00:28:11.172 } 00:28:11.172 } 00:28:11.172 } 00:28:11.172 ] 00:28:11.172 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # return 0 00:28:11.172 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:11.172 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:28:11.432 19:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:28:11.432 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:11.432 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:28:11.692 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:28:11.692 19:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4f53b2cc-7304-4ea7-814e-066d50ed51f0 00:28:11.952 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a58e0571-db97-4219-9f8f-41787cd9e54e 00:28:12.212 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:28:12.472 00:28:12.472 real 0m19.037s 00:28:12.472 user 0m36.592s 00:28:12.472 sys 0m5.300s 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # xtrace_disable 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:28:12.472 ************************************ 00:28:12.472 END TEST lvs_grow_dirty 00:28:12.472 ************************************ 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # type=--id 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # id=0 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # for n in $shm_files 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:28:12.472 nvmf_trace.0 00:28:12.472 19:57:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # return 0 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:12.472 rmmod nvme_tcp 00:28:12.472 rmmod nvme_fabrics 00:28:12.472 rmmod nvme_keyring 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:12.472 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # '[' -n 1311436 ']' 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@494 -- # killprocess 1311436 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' -z 1311436 ']' 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # kill -0 1311436 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # uname 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1311436 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1311436' 00:28:12.731 killing process with pid 1311436 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # kill 1311436 00:28:12.731 19:57:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@975 -- # wait 1311436 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
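[editor's note] The nvmftestfini teardown traced here reduces to a few commands. A minimal sketch, not part of the captured log, assuming $nvmfpid was captured when the target started and cvl_0_1 is the test interface name used on this CI bed:

kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt started above
modprobe -v -r nvme-tcp              # unload initiator modules, as in the trace
modprobe -v -r nvme-fabrics
ip -4 addr flush cvl_0_1             # drop the test addresses from the peer interface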
00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:12.989 19:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:28:14.897 00:28:14.897 real 0m41.821s 00:28:14.897 user 0m55.143s 00:28:14.897 sys 0m9.074s 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # xtrace_disable 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:28:14.897 ************************************ 00:28:14.897 END TEST nvmf_lvs_grow 00:28:14.897 ************************************ 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:14.897 ************************************ 00:28:14.897 START TEST nvmf_bdev_io_wait 00:28:14.897 ************************************ 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:28:14.897 * Looking for test storage... 
00:28:14.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.897 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:28:15.156 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@452 -- # prepare_net_devs 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # local -g is_hw=no 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # remove_spdk_ns 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # xtrace_disable 00:28:15.157 19:57:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # pci_devs=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -a pci_devs 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # pci_net_devs=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # pci_drivers=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -A pci_drivers 
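The stretch that follows traces gather_supported_nvmf_pci_devs: it builds per-vendor device-ID lists (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox ConnectX parts), keeps only the E810 entries because this run sets SPDK_TEST_NVMF_NICS=e810, then walks each matching PCI function's net/ directory in sysfs to find the kernel interface behind it. The shell sketch below shows the same discovery pattern; it is illustrative only, a stand-in for the real nvmf/common.sh helpers, assuming lspci is installed and using the E810 ID 0x8086:0x159b seen in this run.

# Sketch, not the actual test code: find E810 PCI functions by
# vendor:device ID and print the net interface behind each one.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    for net in "/sys/bus/pci/devices/$pci/net/"*; do
        # The glob stays unexpanded when the device has no netdev.
        [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
    done
done

On this node the two matching functions are 0000:0a:00.0 and 0000:0a:00.1, whose interfaces resolve to cvl_0_0 and cvl_0_1, as echoed further down.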
00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # net_devs=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # local -ga net_devs 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # e810=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # local -ga e810 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # x722=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # local -ga x722 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # mlx=() 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # local -ga mlx 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:17.111 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:17.112 19:57:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:17.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:17.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:17.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.112 19:57:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:17.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # is_hw=yes 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # ip netns add 
cvl_0_0_ns_spdk 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:28:17.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:28:17.112 00:28:17.112 --- 10.0.0.2 ping statistics --- 00:28:17.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.112 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:28:17.112 00:28:17.112 --- 10.0.0.1 ping statistics --- 00:28:17.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.112 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # return 0 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:28:17.112 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:28:17.113 19:57:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@725 -- # xtrace_disable 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@485 -- # nvmfpid=1313923 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@486 -- # waitforlisten 1313923 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # '[' -z 1313923 ']' 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local max_retries=100 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:17.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@841 -- # xtrace_disable 00:28:17.113 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.113 [2024-07-24 19:57:34.391365] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:17.113 [2024-07-24 19:57:34.392454] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:17.113 [2024-07-24 19:57:34.392511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:17.113 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.113 [2024-07-24 19:57:34.459446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:17.371 [2024-07-24 19:57:34.584755] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:17.371 [2024-07-24 19:57:34.584824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:17.371 [2024-07-24 19:57:34.584853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:17.371 [2024-07-24 19:57:34.584865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:17.371 [2024-07-24 19:57:34.584875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:17.371 [2024-07-24 19:57:34.584961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.371 [2024-07-24 19:57:34.585031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.371 [2024-07-24 19:57:34.585004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:17.371 [2024-07-24 19:57:34.585025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:17.371 [2024-07-24 19:57:34.585571] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@865 -- # return 0 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@731 -- # xtrace_disable 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.371 [2024-07-24 19:57:34.710663] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:17.371 [2024-07-24 19:57:34.710914] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:17.371 [2024-07-24 19:57:34.711888] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:17.371 [2024-07-24 19:57:34.712827] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
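Stripped of the xtrace prefixes, the interrupt-mode target bring-up performed here (app launch above, transport and subsystem RPCs just below) reduces to a short RPC sequence. The sketch restates it with scripts/rpc.py against the default RPC socket; the paths are shortened relative to the jenkins workspace, everything else is taken from the trace:

# Sketch of the bring-up this trace performs. --wait-for-rpc holds the
# app so bdev options can be set before subsystem initialization.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
scripts/rpc.py bdev_set_options -p 5 -c 1      # tiny bdev_io pool, as in bdev_io_wait.sh
scripts/rpc.py framework_start_init            # poll-group threads report intr mode (notices above)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

A pool this small (-p 5 -c 1) is presumably what forces concurrent submissions to run out of bdev_io structures and queue, which is the wait path this test exists to exercise.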
00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.371 [2024-07-24 19:57:34.717763] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.371 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.630 Malloc0 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:17.630 [2024-07-24 19:57:34.785944] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1313973 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1313974 00:28:17.630 19:57:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1313977 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:17.630 { 00:28:17.630 "params": { 00:28:17.630 "name": "Nvme$subsystem", 00:28:17.630 "trtype": "$TEST_TRANSPORT", 00:28:17.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.630 "adrfam": "ipv4", 00:28:17.630 "trsvcid": "$NVMF_PORT", 00:28:17.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.630 "hdgst": ${hdgst:-false}, 00:28:17.630 "ddgst": ${ddgst:-false} 00:28:17.630 }, 00:28:17.630 "method": "bdev_nvme_attach_controller" 00:28:17.630 } 00:28:17.630 EOF 00:28:17.630 )") 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:17.630 { 00:28:17.630 "params": { 00:28:17.630 "name": "Nvme$subsystem", 00:28:17.630 "trtype": "$TEST_TRANSPORT", 00:28:17.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.630 "adrfam": "ipv4", 00:28:17.630 "trsvcid": "$NVMF_PORT", 00:28:17.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.630 "hdgst": ${hdgst:-false}, 00:28:17.630 "ddgst": ${ddgst:-false} 00:28:17.630 }, 00:28:17.630 "method": "bdev_nvme_attach_controller" 00:28:17.630 } 00:28:17.630 EOF 00:28:17.630 )") 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1313979 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 
1 -s 256 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:28:17.630 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:17.630 { 00:28:17.630 "params": { 00:28:17.631 "name": "Nvme$subsystem", 00:28:17.631 "trtype": "$TEST_TRANSPORT", 00:28:17.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.631 "adrfam": "ipv4", 00:28:17.631 "trsvcid": "$NVMF_PORT", 00:28:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.631 "hdgst": ${hdgst:-false}, 00:28:17.631 "ddgst": ${ddgst:-false} 00:28:17.631 }, 00:28:17.631 "method": "bdev_nvme_attach_controller" 00:28:17.631 } 00:28:17.631 EOF 00:28:17.631 )") 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # config=() 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@536 -- # local subsystem config 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:17.631 { 00:28:17.631 "params": { 00:28:17.631 "name": "Nvme$subsystem", 00:28:17.631 "trtype": "$TEST_TRANSPORT", 00:28:17.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:17.631 "adrfam": "ipv4", 00:28:17.631 "trsvcid": "$NVMF_PORT", 00:28:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:17.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:17.631 "hdgst": ${hdgst:-false}, 00:28:17.631 "ddgst": ${ddgst:-false} 00:28:17.631 }, 00:28:17.631 "method": "bdev_nvme_attach_controller" 00:28:17.631 } 00:28:17.631 EOF 00:28:17.631 )") 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1313973 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # cat 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 
00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # jq . 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:17.631 "params": { 00:28:17.631 "name": "Nvme1", 00:28:17.631 "trtype": "tcp", 00:28:17.631 "traddr": "10.0.0.2", 00:28:17.631 "adrfam": "ipv4", 00:28:17.631 "trsvcid": "4420", 00:28:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.631 "hdgst": false, 00:28:17.631 "ddgst": false 00:28:17.631 }, 00:28:17.631 "method": "bdev_nvme_attach_controller" 00:28:17.631 }' 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:17.631 "params": { 00:28:17.631 "name": "Nvme1", 00:28:17.631 "trtype": "tcp", 00:28:17.631 "traddr": "10.0.0.2", 00:28:17.631 "adrfam": "ipv4", 00:28:17.631 "trsvcid": "4420", 00:28:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.631 "hdgst": false, 00:28:17.631 "ddgst": false 00:28:17.631 }, 00:28:17.631 "method": "bdev_nvme_attach_controller" 00:28:17.631 }' 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:17.631 "params": { 00:28:17.631 "name": "Nvme1", 00:28:17.631 "trtype": "tcp", 00:28:17.631 "traddr": "10.0.0.2", 00:28:17.631 "adrfam": "ipv4", 00:28:17.631 "trsvcid": "4420", 00:28:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.631 "hdgst": false, 00:28:17.631 "ddgst": false 00:28:17.631 }, 00:28:17.631 "method": "bdev_nvme_attach_controller" 00:28:17.631 }' 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@561 -- # IFS=, 00:28:17.631 19:57:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:17.631 "params": { 00:28:17.631 "name": "Nvme1", 00:28:17.631 "trtype": "tcp", 00:28:17.631 "traddr": "10.0.0.2", 00:28:17.631 "adrfam": "ipv4", 00:28:17.631 "trsvcid": "4420", 00:28:17.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:17.631 "hdgst": false, 00:28:17.631 "ddgst": false 00:28:17.631 }, 00:28:17.631 "method": "bdev_nvme_attach_controller" 00:28:17.631 }' 00:28:17.631 [2024-07-24 19:57:34.835834] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:17.631 [2024-07-24 19:57:34.835836] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:17.631 [2024-07-24 19:57:34.835838] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:17.631 [2024-07-24 19:57:34.835836] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
00:28:17.631 [2024-07-24 19:57:34.835939] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:28:17.631 [2024-07-24 19:57:34.835940] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:28:17.631 [2024-07-24 19:57:34.835939] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:28:17.631 [2024-07-24 19:57:34.835939] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:28:17.631 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.631 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-24 19:57:35.001334] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.889 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-24 19:57:35.097632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:17.889 [2024-07-24 19:57:35.097760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.889 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-24 19:57:35.194766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:17.889 [2024-07-24 19:57:35.197238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.147 [2024-07-24 19:57:35.296786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:18.147 [2024-07-24 19:57:35.299270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.147 [2024-07-24 19:57:35.399189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:18.147 Running I/O for 1 seconds... 00:28:18.405 Running I/O for 1 seconds... 00:28:18.405 Running I/O for 1 seconds... 00:28:18.405 Running I/O for 1 seconds...
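Four bdevperf instances are now running concurrently against the same target, one per I/O type; the EAL lines above tie them together: core mask 0x10 / file prefix spdk1 runs write, 0x20/spdk2 read, 0x40/spdk3 flush, and 0x80/spdk4 unmap, each at queue depth 128 with 4096-byte I/O for one second on 256 MB of memory. Below is a sketch of the write instance run standalone; the test feeds the JSON via process substitution (/dev/fd/63), while the sketch writes it to a hypothetical file, wrapping the bdev_nvme_attach_controller parameters printed above in SPDK's usual subsystems/config envelope (the envelope itself comes from nvmf/common.sh, not this trace):

# Sketch: standalone version of the -w write job. Paths shortened;
# /tmp/bdevperf_nvme.json is a stand-in for the /dev/fd/63 pipe.
cat > /tmp/bdevperf_nvme.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -m 0x10 -i 1 --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w write -t 1 -s 256

In the results that follow, the flush job's IOPS sits orders of magnitude above the others; that is expected, since the target namespace is a RAM-backed malloc bdev for which a flush completes without touching any media.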
00:28:19.337 00:28:19.337 Latency(us) 00:28:19.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.337 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:28:19.337 Nvme1n1 : 1.02 6915.28 27.01 0.00 0.00 18324.88 2852.03 27185.30 00:28:19.337 =================================================================================================================== 00:28:19.337 Total : 6915.28 27.01 0.00 0.00 18324.88 2852.03 27185.30 00:28:19.337 00:28:19.338 Latency(us) 00:28:19.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.338 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:28:19.338 Nvme1n1 : 1.00 190436.59 743.89 0.00 0.00 669.47 280.65 892.02 00:28:19.338 =================================================================================================================== 00:28:19.338 Total : 190436.59 743.89 0.00 0.00 669.47 280.65 892.02 00:28:19.338 00:28:19.338 Latency(us) 00:28:19.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.338 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:28:19.338 Nvme1n1 : 1.01 6250.03 24.41 0.00 0.00 20388.79 8204.14 33010.73 00:28:19.338 =================================================================================================================== 00:28:19.338 Total : 6250.03 24.41 0.00 0.00 20388.79 8204.14 33010.73 00:28:19.338 00:28:19.338 Latency(us) 00:28:19.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:19.338 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:28:19.338 Nvme1n1 : 1.01 9536.68 37.25 0.00 0.00 13368.50 2657.85 19029.71 00:28:19.338 =================================================================================================================== 00:28:19.338 Total : 9536.68 37.25 0.00 0.00 13368.50 2657.85 19029.71 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1313974 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1313977 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1313979 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.595 19:57:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.595 19:57:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.854 rmmod nvme_tcp 00:28:19.854 rmmod nvme_fabrics 00:28:19.854 rmmod nvme_keyring 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # '[' -n 1313923 ']' 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # killprocess 1313923 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' -z 1313923 ']' 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # kill -0 1313923 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # uname 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1313923 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1313923' 00:28:19.854 killing process with pid 1313923 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # kill 1313923 00:28:19.854 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@975 -- # wait 1313923 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.113 19:57:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:28:22.015 00:28:22.015 real 0m7.135s 00:28:22.015 user 0m15.306s 00:28:22.015 sys 0m4.083s 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # xtrace_disable 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:28:22.015 ************************************ 00:28:22.015 END TEST nvmf_bdev_io_wait 00:28:22.015 ************************************ 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:28:22.015 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:22.274 ************************************ 00:28:22.274 START TEST nvmf_queue_depth 00:28:22.274 ************************************ 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:28:22.274 * Looking for test storage... 00:28:22.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:22.274 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.275 19:57:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@452 -- # prepare_net_devs 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@414 -- # local -g is_hw=no 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # remove_spdk_ns 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.275 19:57:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # xtrace_disable 00:28:22.275 19:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@295 -- # pci_devs=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -a pci_devs 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # pci_net_devs=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # pci_drivers=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -A pci_drivers 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # net_devs=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # local -ga net_devs 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # e810=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # local -ga e810 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@301 -- # x722=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@301 -- # local -ga x722 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # mlx=() 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # local -ga mlx 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:24.175 19:57:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:24.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:24.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@370 -- # (( 0 > 0 )) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:24.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:24.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # is_hw=yes 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:28:24.175 19:57:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:28:24.175 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:28:24.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:24.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:28:24.434 00:28:24.434 --- 10.0.0.2 ping statistics --- 00:28:24.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.434 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:24.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:24.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:24.434 00:28:24.434 --- 10.0.0.1 ping statistics --- 00:28:24.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:24.434 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@426 -- # return 0 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@725 -- # xtrace_disable 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@485 -- # nvmfpid=1316189 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@486 -- # waitforlisten 1316189 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@832 -- # '[' -z 1316189 ']' 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local max_retries=100 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
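The nvmf_tcp_init sequence traced above builds the topology this test runs on: one port of the dual-port e810 NIC (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, so both ends of the NVMe/TCP connection live on one host but still traverse the physical link. The two pings confirm reachability in each direction before any NVMe traffic flows. A minimal standalone sketch of the same setup, assuming tgt_if and ini_if stand in for the cvl_0_0/cvl_0_1 port names:

  tgt_if=cvl_0_0; ini_if=cvl_0_1; ns=cvl_0_0_ns_spdk
  ip netns add "$ns"                                   # private namespace for the target side
  ip link set "$tgt_if" netns "$ns"                    # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev "$ini_if"                # initiator address
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1    # verify both directions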
00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@841 -- # xtrace_disable 00:28:24.434 19:57:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:24.434 [2024-07-24 19:57:41.658266] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:24.434 [2024-07-24 19:57:41.659351] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:24.434 [2024-07-24 19:57:41.659406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.434 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.434 [2024-07-24 19:57:41.726923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.693 [2024-07-24 19:57:41.842153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.693 [2024-07-24 19:57:41.842210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.693 [2024-07-24 19:57:41.842236] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.693 [2024-07-24 19:57:41.842259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.693 [2024-07-24 19:57:41.842272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.693 [2024-07-24 19:57:41.842305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.693 [2024-07-24 19:57:41.939765] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:24.693 [2024-07-24 19:57:41.940123] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
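nvmfappstart then launches nvmf_tgt inside that namespace with --interrupt-mode, so the reactor sleeps on file descriptors instead of busy-polling, and waitforlisten blocks until the RPC socket answers; the two thread.c notices above confirm app_thread and the poll group were switched to interrupt mode. A rough equivalent of the launch-and-wait step, with $SPDK standing in for the long Jenkins workspace checkout and rpc_get_methods chosen here as an illustrative readiness probe (waitforlisten's actual check differs in detail):

  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!                                   # killprocess $nvmfpid tears this down later
  for _ in $(seq 1 100); do                    # waitforlisten retries up to ~100 times
      "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
          &>/dev/null && break
      sleep 0.5
  done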
00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@865 -- # return 0 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@731 -- # xtrace_disable 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.260 [2024-07-24 19:57:42.610935] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:25.260 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.519 Malloc0 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 
00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.519 [2024-07-24 19:57:42.675035] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1316341 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1316341 /var/tmp/bdevperf.sock 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@832 -- # '[' -z 1316341 ']' 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local max_retries=100 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:25.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@841 -- # xtrace_disable 00:28:25.519 19:57:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:25.519 [2024-07-24 19:57:42.721895] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
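With the target listening, queue_depth.sh provisions everything over JSON-RPC and then points bdevperf at it with a queue depth of 1024, deep enough to force queuing above what a single NVMe/TCP queue pair typically exposes, which is the point of the test. The same sequence reduced to the underlying rpc.py and bdevperf invocations seen in the trace, again with $SPDK standing in for the workspace path:

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf waits (-z) on its own RPC socket, attaches the
  # remote controller, then runs 1024 outstanding 4 KiB verify I/Os for 10 s.
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
      -q 1024 -o 4096 -w verify -t 10 &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests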
00:28:25.519 [2024-07-24 19:57:42.721971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1316341 ] 00:28:25.519 EAL: No free 2048 kB hugepages reported on node 1 00:28:25.519 [2024-07-24 19:57:42.782772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.777 [2024-07-24 19:57:42.899873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:25.777 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:28:25.777 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@865 -- # return 0 00:28:25.777 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:25.777 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:25.777 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:26.035 NVMe0n1 00:28:26.035 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:26.035 19:57:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:26.035 Running I/O for 10 seconds... 00:28:38.224 00:28:38.224 Latency(us) 00:28:38.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:28:38.224 Verification LBA range: start 0x0 length 0x4000 00:28:38.224 NVMe0n1 : 10.09 8355.10 32.64 0.00 0.00 121938.24 24369.68 76507.21 00:28:38.224 =================================================================================================================== 00:28:38.224 Total : 8355.10 32.64 0.00 0.00 121938.24 24369.68 76507.21 00:28:38.224 0 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1316341 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' -z 1316341 ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # kill -0 1316341 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # uname 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1316341 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 
-- # echo 'killing process with pid 1316341' 00:28:38.224 killing process with pid 1316341 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # kill 1316341 00:28:38.224 Received shutdown signal, test time was about 10.000000 seconds 00:28:38.224 00:28:38.224 Latency(us) 00:28:38.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.224 =================================================================================================================== 00:28:38.224 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@975 -- # wait 1316341 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:38.224 rmmod nvme_tcp 00:28:38.224 rmmod nvme_fabrics 00:28:38.224 rmmod nvme_keyring 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # '[' -n 1316189 ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@494 -- # killprocess 1316189 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' -z 1316189 ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@955 -- # kill -0 1316189 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # uname 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1316189 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@969 -- # echo 'killing process with 
pid 1316189' 00:28:38.224 killing process with pid 1316189 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # kill 1316189 00:28:38.224 19:57:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@975 -- # wait 1316189 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.224 19:57:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:28:39.159 00:28:39.159 real 0m16.765s 00:28:39.159 user 0m22.495s 00:28:39.159 sys 0m3.388s 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # xtrace_disable 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:28:39.159 ************************************ 00:28:39.159 END TEST nvmf_queue_depth 00:28:39.159 ************************************ 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:39.159 ************************************ 00:28:39.159 START TEST nvmf_target_multipath 00:28:39.159 ************************************ 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:28:39.159 * Looking for test storage... 
00:28:39.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.159 19:57:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@452 -- # prepare_net_devs 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@414 -- # local -g is_hw=no 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # remove_spdk_ns 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # xtrace_disable 00:28:39.159 19:57:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@295 -- # pci_devs=() 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -a pci_devs 
00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # pci_net_devs=() 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # pci_drivers=() 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -A pci_drivers 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # net_devs=() 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # local -ga net_devs 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # e810=() 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # local -ga e810 00:28:41.060 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@301 -- # x722=() 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@301 -- # local -ga x722 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # mlx=() 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # local -ga mlx 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:28:41.061 19:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:41.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:41.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.061 19:57:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:41.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:41.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # is_hw=yes 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:28:41.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:28:41.061 00:28:41.061 --- 10.0.0.2 ping statistics --- 00:28:41.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.061 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:28:41.061 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:28:41.061 00:28:41.061 --- 10.0.0.1 ping statistics --- 00:28:41.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.061 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@426 -- # return 0 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:28:41.062 only one NIC for nvmf test 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:41.062 rmmod nvme_tcp 00:28:41.062 rmmod nvme_fabrics 00:28:41.062 rmmod nvme_keyring 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p 
]] 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:41.062 19:57:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # nvmfcleanup 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # '[' -n '' ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@282 -- # remove_spdk_ns 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.605 19:58:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:28:43.605 00:28:43.605 real 0m4.153s 00:28:43.605 user 0m0.742s 00:28:43.605 sys 0m1.397s 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # xtrace_disable 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:28:43.605 ************************************ 00:28:43.605 END TEST nvmf_target_multipath 00:28:43.605 ************************************ 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:43.605 ************************************ 00:28:43.605 START TEST nvmf_zcopy 00:28:43.605 ************************************ 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:28:43.605 * Looking for test storage... 00:28:43.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.605 19:58:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.605 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@452 -- # prepare_net_devs 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@414 -- # local -g is_hw=no 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # remove_spdk_ns 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # xtrace_disable 00:28:43.606 19:58:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
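The device-gathering step that follows keys supported NICs by PCI vendor:device ID (the e810 array matches 0x8086:0x159b on this machine) and then resolves each PCI function to its kernel net device through sysfs. A minimal standalone sketch of that resolution, using the same sysfs glob the harness traces below; the loop itself is illustrative, not the exact common.sh code:

    # list e810 functions (0x8086:0x159b) and the netdevs the kernel created for them
    for pci in $(lspci -Dnd 8086:159b | awk '{print $1}'); do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue   # port bound to a non-net driver, skip
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done

On this host that yields the two ice-driven ports cvl_0_0 and cvl_0_1 seen in the multipath run above.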
00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@295 -- # pci_devs=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@295 -- # local -a pci_devs 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # pci_net_devs=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # pci_drivers=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # local -A pci_drivers 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # net_devs=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # local -ga net_devs 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # e810=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # local -ga e810 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@301 -- # x722=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@301 -- # local -ga x722 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # mlx=() 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # local -ga mlx 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:45.504 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:45.504 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:45.504 
19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:45.504 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # [[ up == up ]] 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:45.504 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # is_hw=yes 00:28:45.504 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:45.505 19:58:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:28:45.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:45.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:28:45.505 00:28:45.505 --- 10.0.0.2 ping statistics --- 00:28:45.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.505 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:45.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:45.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:28:45.505 00:28:45.505 --- 10.0.0.1 ping statistics --- 00:28:45.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:45.505 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@426 -- # return 0 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@725 -- # xtrace_disable 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@485 -- # nvmfpid=1321393 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@486 -- # waitforlisten 1321393 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@832 -- # '[' -z 1321393 ']' 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local max_retries=100 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:45.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@841 -- # xtrace_disable 00:28:45.505 19:58:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.505 [2024-07-24 19:58:02.721009] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
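Stripped of the helper plumbing, the nvmf_tcp_init and nvmfappstart sequence traced above amounts to: move the target port (cvl_0_0) into a private network namespace, keep the initiator port (cvl_0_1) in the root namespace, address both ends, and start the target inside the namespace. A sketch of the equivalent standalone commands, exactly as executed in this run (binary path shortened to be relative to the spdk checkout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                              # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator sanity check
    # target on core 1 (mask 0x2), interrupt-driven rather than polling:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2

Since both ports sit on the same host, the namespace split is what forces the 10.0.0.1 <-> 10.0.0.2 traffic onto the physical link instead of the kernel's local loopback route.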
00:28:45.505 [2024-07-24 19:58:02.722149] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:45.505 [2024-07-24 19:58:02.722216] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:45.505 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.505 [2024-07-24 19:58:02.792378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.762 [2024-07-24 19:58:02.910622] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:45.762 [2024-07-24 19:58:02.910675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:45.762 [2024-07-24 19:58:02.910691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:45.762 [2024-07-24 19:58:02.910705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:45.762 [2024-07-24 19:58:02.910717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:45.762 [2024-07-24 19:58:02.910746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:45.763 [2024-07-24 19:58:03.000991] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:45.763 [2024-07-24 19:58:03.001335] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@865 -- # return 0 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@731 -- # xtrace_disable 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 [2024-07-24 19:58:03.047378] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:45.763 
19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 [2024-07-24 19:58:03.063552] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 malloc0 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:45.763 { 00:28:45.763 "params": { 00:28:45.763 "name": "Nvme$subsystem", 00:28:45.763 "trtype": "$TEST_TRANSPORT", 00:28:45.763 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:28:45.763 "adrfam": "ipv4", 00:28:45.763 "trsvcid": "$NVMF_PORT", 00:28:45.763 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.763 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.763 "hdgst": ${hdgst:-false}, 00:28:45.763 "ddgst": ${ddgst:-false} 00:28:45.763 }, 00:28:45.763 "method": "bdev_nvme_attach_controller" 00:28:45.763 } 00:28:45.763 EOF 00:28:45.763 )") 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # jq . 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:28:45.763 19:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:45.763 "params": { 00:28:45.763 "name": "Nvme1", 00:28:45.763 "trtype": "tcp", 00:28:45.763 "traddr": "10.0.0.2", 00:28:45.763 "adrfam": "ipv4", 00:28:45.763 "trsvcid": "4420", 00:28:45.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.763 "hdgst": false, 00:28:45.763 "ddgst": false 00:28:45.763 }, 00:28:45.763 "method": "bdev_nvme_attach_controller" 00:28:45.763 }' 00:28:46.020 [2024-07-24 19:58:03.158035] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:28:46.020 [2024-07-24 19:58:03.158117] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321522 ] 00:28:46.020 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.020 [2024-07-24 19:58:03.219802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.020 [2024-07-24 19:58:03.336385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.277 Running I/O for 10 seconds... 
00:28:56.239
00:28:56.239 Latency(us)
00:28:56.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:56.239 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:28:56.239 Verification LBA range: start 0x0 length 0x1000
00:28:56.239 Nvme1n1 : 10.01 5694.07 44.48 0.00 0.00 22415.87 761.55 30680.56
00:28:56.239 ===================================================================================================================
00:28:56.239 Total : 5694.07 44.48 0.00 0.00 22415.87 761.55 30680.56
00:28:56.496 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1322707 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:28:56.496 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # config=() 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@536 -- # local subsystem config 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:28:56.496 { 00:28:56.496 "params": { 00:28:56.496 "name": "Nvme$subsystem", 00:28:56.496 "trtype": "$TEST_TRANSPORT", 00:28:56.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.496 "adrfam": "ipv4", 00:28:56.496 "trsvcid": "$NVMF_PORT", 00:28:56.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.496 "hdgst": ${hdgst:-false}, 00:28:56.496 "ddgst": ${ddgst:-false} 00:28:56.496 }, 00:28:56.496 "method": "bdev_nvme_attach_controller" 00:28:56.496 } 00:28:56.496 EOF 00:28:56.496 )") 00:28:56.496 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@558 -- # cat 00:28:56.496 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # jq .
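gen_nvmf_target_json hands bdevperf its controller over an anonymous pipe (--json /dev/fd/63) rather than a config file on disk. The heredoc template above, once expanded and passed through jq, gets embedded in the standard SPDK JSON-config wrapper; the full document is roughly the following sketch, where the outer subsystems/bdev scaffolding is assumed from the config format and only the inner fragment is actually echoed in this log:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }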
00:28:56.496 [2024-07-24 19:58:13.831273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.496 [2024-07-24 19:58:13.831313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.496 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@561 -- # IFS=, 00:28:56.496 19:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:28:56.496 "params": { 00:28:56.496 "name": "Nvme1", 00:28:56.496 "trtype": "tcp", 00:28:56.496 "traddr": "10.0.0.2", 00:28:56.496 "adrfam": "ipv4", 00:28:56.496 "trsvcid": "4420", 00:28:56.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:56.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:56.496 "hdgst": false, 00:28:56.496 "ddgst": false 00:28:56.496 }, 00:28:56.496 "method": "bdev_nvme_attach_controller" 00:28:56.496 }' 00:28:56.496 [2024-07-24 19:58:13.839185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.497 [2024-07-24 19:58:13.839206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.497 [2024-07-24 19:58:13.847184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.497 [2024-07-24 19:58:13.847203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.497 [2024-07-24 19:58:13.855188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.497 [2024-07-24 19:58:13.855209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.497 [2024-07-24 19:58:13.863188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.497 [2024-07-24 19:58:13.863209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.497 [2024-07-24 19:58:13.869628] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
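The bdevperf invocation for this second pass (--json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192) swaps the 10-second verify workload for a short mixed one; the flags decode as:

    -t 5        run time: 5 seconds
    -q 128      queue depth: 128 outstanding I/Os
    -w randrw   workload: random reads and writes
    -M 50       read/write mix: 50% reads
    -o 8192     I/O size: 8192 bytes, the same 8 KiB as the verify pass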
00:28:56.497 [2024-07-24 19:58:13.869709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1322707 ] 00:28:56.497 [2024-07-24 19:58:13.871183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.497 [2024-07-24 19:58:13.871204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.879186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.879205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.887184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.887203] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.895183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.895202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.755 [2024-07-24 19:58:13.903190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.903212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.911185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.911204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.919185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.919204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.927186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.927205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.929453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.755 [2024-07-24 19:58:13.935206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.935253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.943234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.943276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.951188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.951207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.959187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 19:58:13.959206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:56.755 [2024-07-24 19:58:13.967189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:56.755 [2024-07-24 
00:28:56.756 [2024-07-24 19:58:14.054754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:28:57.014 Running I/O for 5 seconds...
00:28:57.014 [2024-07-24 19:58:14.246635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:28:57.014 [2024-07-24 19:58:14.246664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
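"Running I/O for 5 seconds..." is bdevperf's workload banner, and the EAL line at the top of this block shows the app was started single-core (-c 0x1). A sketch of the kind of invocation that produces this phase; the queue depth, I/O size, workload, and config path are illustrative placeholders, not the exact command line from this run:

  # -m 0x1 pins bdevperf to core 0 (the EAL -c 0x1 above); -t 5 gives the
  # 5-second run; -q/-o/-w values here are illustrative
  ./build/examples/bdevperf -m 0x1 -q 128 -o 4096 -w verify -t 5 \
      --json /tmp/bdevperf_nvme.json   # hypothetical bdev config attaching the NVMe-oF target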
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.097335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.097360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.112175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.112217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.121900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.121924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.135118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.135144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.144897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.144925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.160269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.160303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.169479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.169505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.184326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.184352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.193565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.193605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.207073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.207099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.216873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.216902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:28:59.856 [2024-07-24 19:58:17.232059] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:28:59.856 [2024-07-24 19:58:17.232100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.241275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.241301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.254970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.254996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.264366] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.264391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.275833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.275862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.286356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.286382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.301901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.301928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.317013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.317039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.326853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.326878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.338357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.338384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.348136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.348163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.359330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.359356] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.369802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.369831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.385556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.385605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.401405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.401431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.419690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.419718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.430579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.113 [2024-07-24 19:58:17.430606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.113 [2024-07-24 19:58:17.441231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.114 [2024-07-24 19:58:17.441270] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.114 [2024-07-24 19:58:17.459538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.114 [2024-07-24 19:58:17.459568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.114 [2024-07-24 19:58:17.469334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.114 [2024-07-24 19:58:17.469360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.114 [2024-07-24 19:58:17.480471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.114 [2024-07-24 19:58:17.480497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.114 [2024-07-24 19:58:17.490033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.114 [2024-07-24 19:58:17.490070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.503770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.371 [2024-07-24 19:58:17.503796] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.512917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.371 [2024-07-24 19:58:17.512946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.528787] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.371 [2024-07-24 19:58:17.528817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.537762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.371 [2024-07-24 19:58:17.537791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.549019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.371 [2024-07-24 19:58:17.549049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.562360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.371 [2024-07-24 19:58:17.562387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.371 [2024-07-24 19:58:17.577870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.577900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.595169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.595198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.603864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.603891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.615609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.615635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.626557] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.626605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.636045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.636072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.648121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.648146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.658404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.658430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.672180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.672210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.681438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.681463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.695550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.695576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.704936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.704965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.720224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.720262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.729625] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.729654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.372 [2024-07-24 19:58:17.743125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.372 [2024-07-24 19:58:17.743155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.752381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.752411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.763848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.763878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.774592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.774618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.787613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.787653] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.796945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.796971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.811116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.811145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.820411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.820438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.832382] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.832407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.842357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.842391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.853890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.853916] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.868704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.868731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.878389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.878415] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.890427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.890453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.900327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.900354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.912452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.912479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.922432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.922458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.935590] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.935617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.945029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.945054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.959675] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.959701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.968526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.968567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.979508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.979548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:17.989801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:17.989827] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.631 [2024-07-24 19:58:18.002122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.631 [2024-07-24 19:58:18.002148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.018844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.018870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.028062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.028089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.039285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.039311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.049732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.049756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.065786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.065810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.082489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.082515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.092196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.092222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.103391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.103417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.113671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.113695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.129311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.129337] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.147042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.147067] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.156468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.156494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.167433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.167459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.177405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.177430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.191627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.191653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.200316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.200342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.210981] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.211020] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.219988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.220014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.231019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.231045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.241025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.241049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.256305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.256332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:00.889 [2024-07-24 19:58:18.265396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:00.889 [2024-07-24 19:58:18.265422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.279931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.279973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.289159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.289184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.303286] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.303312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.312992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.313017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.323826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.323850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.333377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.333405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.346966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.346992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.358424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.358450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.372795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.372822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.381916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.381942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.393163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.393202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.402681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.402707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.415324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.415350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.424368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.424395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.435344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.435371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.445425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.445451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.459898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.459924] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.468951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.468978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.480195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.480235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.489786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.489810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.503150] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.503176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.512694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.512718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.147 [2024-07-24 19:58:18.523608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.147 [2024-07-24 19:58:18.523632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.533704] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.533731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.546460] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.546486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.562717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.562744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.572144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.572170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.583421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.583447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.593920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.593945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.610666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.610690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.619927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.619953] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.630821] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.630847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.644148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.644175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.652815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.652841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.664198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.664237] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.673541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.673566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.688034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.688074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.697209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.697236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.711510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.711546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.720419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.720446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.731383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.731410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.741109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.741136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.755087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.755116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.764051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.764078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.405 [2024-07-24 19:58:18.775016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.405 [2024-07-24 19:58:18.775043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.785391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.785418] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.798768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.798795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.812536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.812563] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.821905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.821931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.833209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.833260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.842636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.842660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.855571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.855596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.864480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.864506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.875569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.875594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.885765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.885790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.898077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.898104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.663 [2024-07-24 19:58:18.915316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.663 [2024-07-24 19:58:18.915343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:18.924705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:18.924740] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:18.940516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:18.940543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:18.959950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:18.959977] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:18.969604] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:18.969628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:18.983092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:18.983121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:18.991919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:18.991943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:19.003038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:19.003077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:19.013094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:19.013121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:19.027140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:19.027169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.664 [2024-07-24 19:58:19.036510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.664 [2024-07-24 19:58:19.036536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.048386] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.048410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.058261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.058308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.072153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.072182] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.081340] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.081365] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.096717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.096746] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.106338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.106378] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.120074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.120103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:01.922 [2024-07-24 19:58:19.129357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:29:01.922 [2024-07-24 19:58:19.129384] 
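The pairs above come from the zcopy test repeatedly asking the target to add a namespace with NSID 1 while NSID 1 is still attached: subsystem.c rejects the duplicate NSID and nvmf_rpc.c then reports the failed RPC. As a rough illustration only (the real logic lives in test/nvmf/target/zcopy.sh; the loop shape, the retry count, and the bdev name malloc0 below are assumptions, not the script's actual code), a client loop like this against SPDK's scripts/rpc.py would produce exactly this log pattern on the target side:

    # Hypothetical reproduction sketch, not the actual zcopy.sh loop.
    # While NSID 1 is attached, every attempt fails on the target with
    # "Requested NSID 1 already in use" and logs the pair seen above.
    for _ in $(seq 1 200); do
        scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0 \
            && break        # only succeeds once the old namespace is removed
        sleep 0.01          # retry interval; timestamps above are ~10 ms apart
    done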
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:01.922 [... further repeats of the same pair at 19:58:19.142, .154, .166, .176, .187, .203, .212, .224, .243 and .255 ...]
00:29:01.922
00:29:01.922 Latency(us)
00:29:01.922 Device Information                                                  : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:29:01.922 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:29:01.922 Nvme1n1                                                             :       5.01   12062.54      94.24      0.00     0.00   10599.33    3021.94   17864.63
00:29:01.922 ===================================================================================================================
00:29:01.922 Total                                                               :              12062.54      94.24      0.00     0.00   10599.33    3021.94   17864.63
00:29:01.922 [2024-07-24 19:58:19.263219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:01.922 [2024-07-24 19:58:19.263256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:29:01.922 [2024-07-24 19:58:19.271209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:29:01.922 [2024-07-24 19:58:19.271236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
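The summary row above is internally consistent: at the job's 8192-byte I/O size, 12062.54 IOPS works out to 12062.54 * 8192 / 2^20 bytes per second, which matches the reported MiB/s column. A quick sanity check (bc is used here purely for the arithmetic):

    echo 'scale=4; 12062.54 * 8192 / 1048576' | bc
    # 94.2384 -> the 94.24 MiB/s reported in the table, up to rounding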
00:29:02.180 [... the same error pair keeps repeating roughly every 8 ms while the target shuts down, from 19:58:19.279 through 19:58:19.535, identical apart from timestamps ...]
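The repeats stop only once the process driving the RPCs is reaped. In the trace that follows, zcopy.sh (line 42) kills a process, pid 1322707, that has already exited, which is why kill reports "No such process", and then falls through to wait at line 49. A minimal sketch of that shell pattern (the pid is taken from the trace; the error-tolerant guard is an assumption about how the script handles the race):

    # Kill-then-wait teardown suggested by the trace below.
    # If the job already exited, kill fails with "No such process";
    # the failure is tolerated and wait reaps the job either way.
    perf_pid=1322707
    kill "$perf_pid" || true
    wait "$perf_pid"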
00:29:02.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1322707) - No such process
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1322707
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:02.181 delay0
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@562 -- # xtrace_disable
00:29:02.181 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:29:02.439 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]]
00:29:02.439 19:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:29:02.439 EAL: No free 2048 kB hugepages reported on node 1
00:29:02.439 [2024-07-24 19:58:19.691396] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:29:10.558 Initializing NVMe Controllers
00:29:10.558 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:10.558 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
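For reference, the setup just traced maps onto three plain steps: remove the original namespace, wrap the base bdev in a delay bdev (the -r/-t/-w/-n values are the average and tail read/write latencies to inject, in microseconds, so at ~1,000,000 us submitted I/O stays in flight long enough for abort requests to land), then expose delay0 as NSID 1 and run the abort example. A standalone sketch with arguments copied from the trace (rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py; running from the SPDK repo root is assumed):

    # Hedged standalone equivalent of the traced setup, not the harness itself.
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s injected latency
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'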
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:10.558 Initialization complete. Launching workers. 00:29:10.558 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 232, failed: 25716 00:29:10.558 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 25811, failed to submit 137 00:29:10.558 success 25726, unsuccess 85, failed 0 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # nvmfcleanup 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.558 rmmod nvme_tcp 00:29:10.558 rmmod nvme_fabrics 00:29:10.558 rmmod nvme_keyring 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # '[' -n 1321393 ']' 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@494 -- # killprocess 1321393 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' -z 1321393 ']' 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@955 -- # kill -0 1321393 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # uname 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1321393 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1321393' 00:29:10.558 killing process with pid 1321393 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # kill 1321393 00:29:10.558 19:58:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@975 -- # wait 1321393 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:29:10.558 19:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@282 -- # remove_spdk_ns 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.558 19:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:29:11.946 00:29:11.946 real 0m28.840s 00:29:11.946 user 0m40.382s 00:29:11.946 sys 0m10.807s 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # xtrace_disable 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:29:11.946 ************************************ 00:29:11.946 END TEST nvmf_zcopy 00:29:11.946 ************************************ 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:11.946 ************************************ 00:29:11.946 START TEST nvmf_nmic 00:29:11.946 ************************************ 00:29:11.946 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:29:12.203 * Looking for test storage... 
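
Note on the error flood above: zcopy.sh deliberately re-issues nvmf_subsystem_add_ns for an NSID that is already claimed while I/O is outstanding, so every "Requested NSID 1 already in use" / "Unable to add namespace" pair is a pass, not a failure. Reduced to a sketch that uses only rpc.py subcommands visible in this log (the loop count, error handling, and default RPC socket are illustrative assumptions):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # NSID 1 already belongs to this subsystem, so each retry must be rejected
    for i in $(seq 1 20); do
        if "$rpc" nvmf_subsystem_add_ns "$nqn" malloc0 -n 1 2>/dev/null; then
            echo "unexpected success adding duplicate NSID (retry $i)" >&2
            exit 1
        fi
    done
    echo "duplicate-NSID error path verified"

After the loop, the log shows the test swapping the namespace onto a bdev_delay_create-backed delay0 bdev and driving it with the abort example, which is where the "abort submitted 25811" counters above come from.
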
00:29:12.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go triple repeated several more times by nested sourcing, collapsed here ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same entries rotated, collapsed ...] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same entries rotated, collapsed ...] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo [... final PATH echoed back, collapsed ...] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.203 19:58:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@452 -- # prepare_net_devs 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@414 -- # local -g is_hw=no 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # remove_spdk_ns 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:29:12.203 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:29:12.204 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # xtrace_disable 00:29:12.204 19:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@295 -- # pci_devs=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@295 -- # local -a pci_devs 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # pci_net_devs=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # pci_drivers=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # local -A pci_drivers 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # net_devs=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # local -ga net_devs 00:29:14.103 19:58:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # e810=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # local -ga e810 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@301 -- # x722=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@301 -- # local -ga x722 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # mlx=() 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # local -ga mlx 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:14.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:14.103 19:58:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:14.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:14.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:14.103 19:58:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:14.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # is_hw=yes 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:29:14.103 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.360 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:29:14.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:14.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:29:14.361 00:29:14.361 --- 10.0.0.2 ping statistics --- 00:29:14.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.361 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:29:14.361 00:29:14.361 --- 10.0.0.1 ping statistics --- 00:29:14.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.361 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@426 -- # return 0 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@725 -- # xtrace_disable 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@485 -- # nvmfpid=1326191 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@486 -- # waitforlisten 1326191 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@832 -- # '[' -z 1326191 ']' 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@837 -- # local max_retries=100 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@841 -- # xtrace_disable 00:29:14.361 19:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:14.361 [2024-07-24 19:58:31.580970] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:14.361 [2024-07-24 19:58:31.582255] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:29:14.361 [2024-07-24 19:58:31.582318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.361 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.361 [2024-07-24 19:58:31.658715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:14.620 [2024-07-24 19:58:31.787343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.620 [2024-07-24 19:58:31.787390] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.620 [2024-07-24 19:58:31.787406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.620 [2024-07-24 19:58:31.787420] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.620 [2024-07-24 19:58:31.787431] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:14.620 [2024-07-24 19:58:31.787487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.620 [2024-07-24 19:58:31.787548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:14.620 [2024-07-24 19:58:31.787597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.620 [2024-07-24 19:58:31.787593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:14.620 [2024-07-24 19:58:31.897235] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:14.620 [2024-07-24 19:58:31.897494] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:14.620 [2024-07-24 19:58:31.897758] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:14.620 [2024-07-24 19:58:31.898429] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:14.620 [2024-07-24 19:58:31.898691] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
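
Note: the reactor and "Set spdk_thread (...) to intr mode" notices above are the target coming up inside the cvl_0_0_ns_spdk namespace with interrupt mode enabled on cores 0-3. The network plumbing and launch that nvmftestinit/nvmfappstart performed reduce to this sketch (every command and flag below appears in this log; only the readiness poll at the end is an assumed convenience):

    # target-side E810 port lives in its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # initiator -> target sanity check

    # four reactors (-m 0xF), all trace groups (-e 0xFFFF), interrupt mode
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # assumed readiness poll before any configuration RPC is sent
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
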
00:29:15.593 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:29:15.593 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@865 -- # return 0 00:29:15.593 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:29:15.593 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@731 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 [2024-07-24 19:58:32.604315] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 Malloc0 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 [2024-07-24 19:58:32.660525] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:29:15.594 test case1: single bdev can't be used in multiple subsystems 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 [2024-07-24 19:58:32.684235] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:29:15.594 [2024-07-24 19:58:32.684275] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:29:15.594 [2024-07-24 19:58:32.684317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:29:15.594 request: 00:29:15.594 { 00:29:15.594 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:29:15.594 "namespace": { 00:29:15.594 "bdev_name": "Malloc0", 00:29:15.594 "no_auto_visible": false 00:29:15.594 }, 00:29:15.594 "method": "nvmf_subsystem_add_ns", 00:29:15.594 "req_id": 1 00:29:15.594 } 00:29:15.594 Got JSON-RPC error response 00:29:15.594 response: 00:29:15.594 { 00:29:15.594 "code": -32602, 00:29:15.594 "message": "Invalid parameters" 00:29:15.594 } 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:29:15.594 Adding namespace failed - expected result. 
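
Note: the JSON-RPC exchange above is the expected outcome of test case1 — Malloc0 was claimed exclusive_write when it became NSID 1 of cnode1, so a second subsystem cannot open it and the RPC returns -32602 "Invalid parameters". As a standalone sketch, with the rpc.py subcommands exactly as logged (only the surrounding error handling is illustrative):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    # Malloc0 already backs cnode1's NSID 1, so this second claim must be rejected
    if "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo 'FAIL: one bdev was attached to two subsystems' >&2
        exit 1
    fi
    echo 'Adding namespace failed - expected result.'
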
00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:29:15.594 test case2: host connect to nvmf target in multiple paths 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:15.594 [2024-07-24 19:58:32.692344] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:15.594 19:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:29:15.851 19:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:29:15.851 19:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1199 -- # local i=0 00:29:15.851 19:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:29:15.851 19:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # [[ -n '' ]] 00:29:15.851 19:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1206 -- # sleep 2 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # nvme_devices=1 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # return 0 00:29:17.748 19:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:17.748 [global] 00:29:17.748 thread=1 00:29:17.748 invalidate=1 00:29:17.748 rw=write 00:29:17.748 time_based=1 00:29:17.748 runtime=1 00:29:17.748 ioengine=libaio 00:29:17.748 direct=1 00:29:17.748 bs=4096 00:29:17.748 iodepth=1 00:29:17.748 norandommap=0 00:29:17.748 numjobs=1 00:29:17.748 00:29:17.748 verify_dump=1 00:29:17.748 verify_backlog=512 00:29:17.748 
verify_state_save=0 00:29:17.748 do_verify=1 00:29:17.748 verify=crc32c-intel 00:29:17.748 [job0] 00:29:17.748 filename=/dev/nvme0n1 00:29:17.748 Could not set queue depth (nvme0n1) 00:29:18.005 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:18.005 fio-3.35 00:29:18.005 Starting 1 thread 00:29:19.376 00:29:19.376 job0: (groupid=0, jobs=1): err= 0: pid=1326707: Wed Jul 24 19:58:36 2024 00:29:19.376 read: IOPS=23, BW=92.8KiB/s (95.1kB/s)(96.0KiB/1034msec) 00:29:19.376 slat (nsec): min=7206, max=43911, avg=20062.54, stdev=9311.25 00:29:19.376 clat (usec): min=361, max=41114, avg=39268.76, stdev=8287.74 00:29:19.376 lat (usec): min=383, max=41121, avg=39288.82, stdev=8287.25 00:29:19.376 clat percentiles (usec): 00:29:19.376 | 1.00th=[ 363], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:29:19.376 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:19.376 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:19.376 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:19.376 | 99.99th=[41157] 00:29:19.376 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:29:19.376 slat (nsec): min=5552, max=26598, avg=6838.80, stdev=2086.44 00:29:19.376 clat (usec): min=142, max=373, avg=169.05, stdev=17.18 00:29:19.376 lat (usec): min=148, max=380, avg=175.89, stdev=17.69 00:29:19.376 clat percentiles (usec): 00:29:19.376 | 1.00th=[ 147], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:29:19.376 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:29:19.376 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:29:19.376 | 99.00th=[ 204], 99.50th=[ 225], 99.90th=[ 375], 99.95th=[ 375], 00:29:19.376 | 99.99th=[ 375] 00:29:19.376 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:29:19.376 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:19.376 lat (usec) : 250=95.15%, 500=0.56% 00:29:19.376 lat (msec) : 50=4.29% 00:29:19.376 cpu : usr=0.29%, sys=0.19%, ctx=536, majf=0, minf=2 00:29:19.376 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:19.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:19.376 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:19.376 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:19.376 00:29:19.376 Run status group 0 (all jobs): 00:29:19.376 READ: bw=92.8KiB/s (95.1kB/s), 92.8KiB/s-92.8KiB/s (95.1kB/s-95.1kB/s), io=96.0KiB (98.3kB), run=1034-1034msec 00:29:19.376 WRITE: bw=1981KiB/s (2028kB/s), 1981KiB/s-1981KiB/s (2028kB/s-2028kB/s), io=2048KiB (2097kB), run=1034-1034msec 00:29:19.376 00:29:19.376 Disk stats (read/write): 00:29:19.376 nvme0n1: ios=70/512, merge=0/0, ticks=887/77, in_queue=964, util=95.79% 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:19.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1220 -- # local i=0 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1232 -- # return 0 00:29:19.376 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # nvmfcleanup 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:19.377 rmmod nvme_tcp 00:29:19.377 rmmod nvme_fabrics 00:29:19.377 rmmod nvme_keyring 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # '[' -n 1326191 ']' 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@494 -- # killprocess 1326191 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' -z 1326191 ']' 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@955 -- # kill -0 1326191 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # uname 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1326191 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1326191' 00:29:19.377 killing process with pid 1326191 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # kill 1326191 00:29:19.377 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@975 -- # 
wait 1326191 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@282 -- # remove_spdk_ns 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:19.635 19:58:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:29:22.165 00:29:22.165 real 0m9.709s 00:29:22.165 user 0m16.570s 00:29:22.165 sys 0m3.660s 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # xtrace_disable 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:29:22.165 ************************************ 00:29:22.165 END TEST nvmf_nmic 00:29:22.165 ************************************ 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:22.165 ************************************ 00:29:22.165 START TEST nvmf_fio_target 00:29:22.165 ************************************ 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:29:22.165 * Looking for test storage... 
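
Note: the nvmf_nmic run that just finished covered test case2 by connecting the host to the same subsystem through both listeners (ports 4420 and 4421) and pushing a short crc32c-verified write job through the fio wrapper. Flattened into plain commands, with the connect arguments and job options copied from the captured job file (/dev/nvme0n1 matches this run but is host-dependent; treat the block as an illustrative reconstruction, not the wrapper itself):

    nqn=nqn.2016-06.io.spdk:cnode1
    host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
          --hostid=5b23e107-7094-e311-b1cb-001e67a97d55)
    nvme connect "${host[@]}" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
    nvme connect "${host[@]}" -t tcp -n "$nqn" -a 10.0.0.2 -s 4421   # second path, same namespace

    # 4 KiB libaio writes at queue depth 1 for one second, verified with crc32c
    fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=write --bs=4096 --iodepth=1 --numjobs=1 --time_based --runtime=1 \
        --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512

    nvme disconnect -n "$nqn"    # tears down both controllers, as logged
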
00:29:22.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@452 -- # prepare_net_devs 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@414 -- # local -g is_hw=no 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # remove_spdk_ns 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # xtrace_disable 00:29:22.165 19:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@295 -- # pci_devs=() 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@295 -- # local -a pci_devs 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # pci_net_devs=() 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # pci_drivers=() 00:29:24.068 19:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # local -A pci_drivers 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # net_devs=() 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # local -ga net_devs 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # e810=() 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # local -ga e810 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@301 -- # x722=() 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@301 -- # local -ga x722 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # mlx=() 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # local -ga mlx 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # 
for pci in "${pci_devs[@]}" 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:24.068 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:24.068 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:24.068 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.068 19:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:24.068 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # is_hw=yes 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:29:24.068 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.068 19:58:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:29:24.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:29:24.069 00:29:24.069 --- 10.0.0.2 ping statistics --- 00:29:24.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.069 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:24.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:29:24.069 00:29:24.069 --- 10.0.0.1 ping statistics --- 00:29:24.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.069 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@426 -- # return 0 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@725 -- # xtrace_disable 
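At this point nvmf_tcp_init has assembled the physical test topology: the first e810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits the NVMe/TCP listener port, and one ping in each direction verifies the back-to-back link. Condensed from the trace above, with the interface names and addresses exactly as used in this run:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                           # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator

Every target-side command from here on is prefixed with ip netns exec cvl_0_0_ns_spdk through NVMF_TARGET_NS_CMD.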
00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@485 -- # nvmfpid=1328774 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@486 -- # waitforlisten 1328774 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@832 -- # '[' -z 1328774 ']' 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local max_retries=100 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@841 -- # xtrace_disable 00:29:24.069 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.069 [2024-07-24 19:58:41.223463] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:24.069 [2024-07-24 19:58:41.224566] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:29:24.069 [2024-07-24 19:58:41.224630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.069 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.069 [2024-07-24 19:58:41.287220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.069 [2024-07-24 19:58:41.394036] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.069 [2024-07-24 19:58:41.394080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.069 [2024-07-24 19:58:41.394106] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.069 [2024-07-24 19:58:41.394118] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.069 [2024-07-24 19:58:41.394128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
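nvmfappstart has just launched the target inside that namespace: nvmf_tgt (pid 1328774) runs on cores 0-3 (-m 0xF) with every tracepoint group enabled (-e 0xFFFF), and --interrupt-mode makes SPDK switch its reactors and poll groups from busy polling to event-driven wakeups, which the thread.c and reactor.c notices around this point confirm. waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; its exact implementation lives in autotest_common.sh, but one way to express the same readiness check is a short polling loop (a sketch, assuming the default RPC socket and the standard rpc_get_methods RPC):

    # start the target in the background, then wait for its RPC server to answer
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
        sleep 0.5
    done

Only once this returns does fio.sh begin the nvmf_create_transport / bdev_malloc_create / nvmf_create_subsystem RPC sequence seen below.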
00:29:24.069 [2024-07-24 19:58:41.394214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.069 [2024-07-24 19:58:41.394285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.069 [2024-07-24 19:58:41.394351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.069 [2024-07-24 19:58:41.394354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.327 [2024-07-24 19:58:41.500190] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:24.327 [2024-07-24 19:58:41.500432] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:24.327 [2024-07-24 19:58:41.500702] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:24.327 [2024-07-24 19:58:41.501331] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:24.327 [2024-07-24 19:58:41.501594] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@865 -- # return 0 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@731 -- # xtrace_disable 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.327 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:24.586 [2024-07-24 19:58:41.787120] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.586 19:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:24.845 19:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:29:24.845 19:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:25.107 19:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:29:25.107 19:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:25.365 19:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:29:25.365 19:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:29:25.930 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:29:25.930 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:29:25.930 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:26.497 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:29:26.497 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:26.497 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:29:26.497 19:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:27.063 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:29:27.063 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:29:27.063 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:29:27.320 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:27.320 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.578 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:29:27.578 19:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:27.835 19:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.093 [2024-07-24 19:58:45.415205] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.093 19:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:29:28.350 19:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:29:28.607 19:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:29:28.865 19:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:29:28.865 19:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local i=0 00:29:28.865 19:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local nvme_device_counter=1 nvme_devices=0 00:29:28.865 19:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # [[ -n 4 ]] 00:29:28.865 19:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # nvme_device_counter=4 00:29:28.865 19:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1206 -- # sleep 2 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # (( i++ <= 15 )) 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # lsblk -l -o NAME,SERIAL 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # grep -c SPDKISFASTANDAWESOME 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # nvme_devices=4 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # (( nvme_devices == nvme_device_counter )) 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # return 0 00:29:30.793 19:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:29:30.793 [global] 00:29:30.793 thread=1 00:29:30.793 invalidate=1 00:29:30.793 rw=write 00:29:30.793 time_based=1 00:29:30.793 runtime=1 00:29:30.793 ioengine=libaio 00:29:30.793 direct=1 00:29:30.793 bs=4096 00:29:30.793 iodepth=1 00:29:30.793 norandommap=0 00:29:30.793 numjobs=1 00:29:30.793 00:29:30.793 verify_dump=1 00:29:30.793 verify_backlog=512 00:29:30.793 verify_state_save=0 00:29:30.793 do_verify=1 00:29:30.793 verify=crc32c-intel 00:29:30.793 [job0] 00:29:30.793 filename=/dev/nvme0n1 00:29:30.793 [job1] 00:29:30.793 filename=/dev/nvme0n2 00:29:30.794 [job2] 00:29:30.794 filename=/dev/nvme0n3 00:29:30.794 [job3] 00:29:30.794 filename=/dev/nvme0n4 00:29:30.794 Could not set queue depth (nvme0n1) 00:29:30.794 Could not set queue depth (nvme0n2) 00:29:30.794 Could not set queue depth (nvme0n3) 00:29:30.794 Could not set queue depth (nvme0n4) 00:29:31.051 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.051 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.051 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.051 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:31.051 fio-3.35 00:29:31.051 Starting 4 threads 00:29:32.422 00:29:32.423 job0: (groupid=0, jobs=1): err= 0: pid=1329714: Wed Jul 24 19:58:49 2024 
00:29:32.423 read: IOPS=1190, BW=4763KiB/s (4878kB/s)(4768KiB/1001msec) 00:29:32.423 slat (nsec): min=5237, max=66855, avg=20923.34, stdev=10704.51 00:29:32.423 clat (usec): min=244, max=41976, avg=498.06, stdev=2059.69 00:29:32.423 lat (usec): min=258, max=41991, avg=518.98, stdev=2060.30 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 277], 20.00th=[ 293], 00:29:32.423 | 30.00th=[ 314], 40.00th=[ 334], 50.00th=[ 392], 60.00th=[ 424], 00:29:32.423 | 70.00th=[ 457], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 570], 00:29:32.423 | 99.00th=[ 693], 99.50th=[ 775], 99.90th=[41157], 99.95th=[42206], 00:29:32.423 | 99.99th=[42206] 00:29:32.423 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:32.423 slat (usec): min=6, max=32893, avg=32.47, stdev=839.03 00:29:32.423 clat (usec): min=156, max=405, avg=207.46, stdev=29.09 00:29:32.423 lat (usec): min=165, max=33215, avg=239.93, stdev=842.44 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:29:32.423 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 208], 00:29:32.423 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 255], 00:29:32.423 | 99.00th=[ 289], 99.50th=[ 322], 99.90th=[ 367], 99.95th=[ 404], 00:29:32.423 | 99.99th=[ 404] 00:29:32.423 bw ( KiB/s): min= 8192, max= 8192, per=45.13%, avg=8192.00, stdev= 0.00, samples=1 00:29:32.423 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:32.423 lat (usec) : 250=52.82%, 500=40.32%, 750=6.60%, 1000=0.15% 00:29:32.423 lat (msec) : 50=0.11% 00:29:32.423 cpu : usr=1.90%, sys=4.80%, ctx=2731, majf=0, minf=1 00:29:32.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 issued rwts: total=1192,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:32.423 job1: (groupid=0, jobs=1): err= 0: pid=1329715: Wed Jul 24 19:58:49 2024 00:29:32.423 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:29:32.423 slat (nsec): min=5421, max=48656, avg=12193.60, stdev=6341.45 00:29:32.423 clat (usec): min=244, max=41921, avg=719.13, stdev=3817.60 00:29:32.423 lat (usec): min=250, max=41928, avg=731.32, stdev=3817.90 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 302], 00:29:32.423 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 347], 60.00th=[ 367], 00:29:32.423 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[ 474], 95.00th=[ 515], 00:29:32.423 | 99.00th=[ 635], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:29:32.423 | 99.99th=[41681] 00:29:32.423 write: IOPS=1056, BW=4228KiB/s (4329kB/s)(4232KiB/1001msec); 0 zone resets 00:29:32.423 slat (nsec): min=5954, max=51415, avg=14589.42, stdev=7187.46 00:29:32.423 clat (usec): min=172, max=355, avg=214.91, stdev=25.08 00:29:32.423 lat (usec): min=179, max=370, avg=229.50, stdev=28.32 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:29:32.423 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217], 00:29:32.423 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 249], 95.00th=[ 265], 00:29:32.423 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 351], 99.95th=[ 355], 00:29:32.423 | 99.99th=[ 355] 
00:29:32.423 bw ( KiB/s): min= 4096, max= 4096, per=22.57%, avg=4096.00, stdev= 0.00, samples=1 00:29:32.423 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:32.423 lat (usec) : 250=46.25%, 500=50.53%, 750=2.79% 00:29:32.423 lat (msec) : 50=0.43% 00:29:32.423 cpu : usr=1.70%, sys=4.10%, ctx=2083, majf=0, minf=1 00:29:32.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 issued rwts: total=1024,1058,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:32.423 job2: (groupid=0, jobs=1): err= 0: pid=1329727: Wed Jul 24 19:58:49 2024 00:29:32.423 read: IOPS=21, BW=86.0KiB/s (88.1kB/s)(88.0KiB/1023msec) 00:29:32.423 slat (nsec): min=12922, max=37548, avg=23467.59, stdev=10198.14 00:29:32.423 clat (usec): min=40676, max=41092, avg=40952.80, stdev=84.26 00:29:32.423 lat (usec): min=40695, max=41129, avg=40976.27, stdev=84.01 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:29:32.423 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:32.423 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:32.423 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:32.423 | 99.99th=[41157] 00:29:32.423 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:29:32.423 slat (nsec): min=7079, max=53849, avg=11580.54, stdev=6411.21 00:29:32.423 clat (usec): min=173, max=394, avg=220.95, stdev=24.60 00:29:32.423 lat (usec): min=180, max=402, avg=232.53, stdev=26.28 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 194], 20.00th=[ 202], 00:29:32.423 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:29:32.423 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 249], 95.00th=[ 265], 00:29:32.423 | 99.00th=[ 297], 99.50th=[ 314], 99.90th=[ 396], 99.95th=[ 396], 00:29:32.423 | 99.99th=[ 396] 00:29:32.423 bw ( KiB/s): min= 4096, max= 4096, per=22.57%, avg=4096.00, stdev= 0.00, samples=1 00:29:32.423 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:29:32.423 lat (usec) : 250=86.52%, 500=9.36% 00:29:32.423 lat (msec) : 50=4.12% 00:29:32.423 cpu : usr=0.49%, sys=0.68%, ctx=535, majf=0, minf=2 00:29:32.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:32.423 job3: (groupid=0, jobs=1): err= 0: pid=1329733: Wed Jul 24 19:58:49 2024 00:29:32.423 read: IOPS=1467, BW=5870KiB/s (6011kB/s)(5876KiB/1001msec) 00:29:32.423 slat (nsec): min=5560, max=52605, avg=13365.44, stdev=6386.44 00:29:32.423 clat (usec): min=255, max=41913, avg=412.49, stdev=1508.49 00:29:32.423 lat (usec): min=261, max=41932, avg=425.86, stdev=1508.77 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 00:29:32.423 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 351], 00:29:32.423 | 70.00th=[ 375], 80.00th=[ 404], 90.00th=[ 
465], 95.00th=[ 498], 00:29:32.423 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[40633], 99.95th=[41681], 00:29:32.423 | 99.99th=[41681] 00:29:32.423 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:32.423 slat (nsec): min=7233, max=57120, avg=14725.75, stdev=7656.82 00:29:32.423 clat (usec): min=160, max=389, avg=221.21, stdev=29.77 00:29:32.423 lat (usec): min=168, max=399, avg=235.93, stdev=33.10 00:29:32.423 clat percentiles (usec): 00:29:32.423 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:29:32.423 | 30.00th=[ 204], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 227], 00:29:32.423 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 262], 95.00th=[ 277], 00:29:32.423 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 371], 99.95th=[ 392], 00:29:32.423 | 99.99th=[ 392] 00:29:32.423 bw ( KiB/s): min= 6200, max= 6200, per=34.16%, avg=6200.00, stdev= 0.00, samples=1 00:29:32.423 iops : min= 1550, max= 1550, avg=1550.00, stdev= 0.00, samples=1 00:29:32.423 lat (usec) : 250=43.49%, 500=54.14%, 750=2.30% 00:29:32.423 lat (msec) : 50=0.07% 00:29:32.423 cpu : usr=2.70%, sys=6.00%, ctx=3007, majf=0, minf=1 00:29:32.423 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:32.423 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:32.423 issued rwts: total=1469,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:32.423 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:32.423 00:29:32.423 Run status group 0 (all jobs): 00:29:32.423 READ: bw=14.2MiB/s (14.8MB/s), 86.0KiB/s-5870KiB/s (88.1kB/s-6011kB/s), io=14.5MiB (15.2MB), run=1001-1023msec 00:29:32.423 WRITE: bw=17.7MiB/s (18.6MB/s), 2002KiB/s-6138KiB/s (2050kB/s-6285kB/s), io=18.1MiB (19.0MB), run=1001-1023msec 00:29:32.423 00:29:32.423 Disk stats (read/write): 00:29:32.423 nvme0n1: ios=1054/1059, merge=0/0, ticks=1508/222, in_queue=1730, util=97.49% 00:29:32.423 nvme0n2: ios=686/1024, merge=0/0, ticks=578/211, in_queue=789, util=86.35% 00:29:32.423 nvme0n3: ios=17/512, merge=0/0, ticks=697/105, in_queue=802, util=88.68% 00:29:32.423 nvme0n4: ios=1048/1451, merge=0/0, ticks=1403/316, in_queue=1719, util=97.67% 00:29:32.424 19:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:29:32.424 [global] 00:29:32.424 thread=1 00:29:32.424 invalidate=1 00:29:32.424 rw=randwrite 00:29:32.424 time_based=1 00:29:32.424 runtime=1 00:29:32.424 ioengine=libaio 00:29:32.424 direct=1 00:29:32.424 bs=4096 00:29:32.424 iodepth=1 00:29:32.424 norandommap=0 00:29:32.424 numjobs=1 00:29:32.424 00:29:32.424 verify_dump=1 00:29:32.424 verify_backlog=512 00:29:32.424 verify_state_save=0 00:29:32.424 do_verify=1 00:29:32.424 verify=crc32c-intel 00:29:32.424 [job0] 00:29:32.424 filename=/dev/nvme0n1 00:29:32.424 [job1] 00:29:32.424 filename=/dev/nvme0n2 00:29:32.424 [job2] 00:29:32.424 filename=/dev/nvme0n3 00:29:32.424 [job3] 00:29:32.424 filename=/dev/nvme0n4 00:29:32.424 Could not set queue depth (nvme0n1) 00:29:32.424 Could not set queue depth (nvme0n2) 00:29:32.424 Could not set queue depth (nvme0n3) 00:29:32.424 Could not set queue depth (nvme0n4) 00:29:32.424 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:32.424 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:29:32.424 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:32.424 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:32.424 fio-3.35 00:29:32.424 Starting 4 threads 00:29:33.797 00:29:33.797 job0: (groupid=0, jobs=1): err= 0: pid=1330060: Wed Jul 24 19:58:50 2024 00:29:33.797 read: IOPS=638, BW=2555KiB/s (2616kB/s)(2652KiB/1038msec) 00:29:33.797 slat (nsec): min=6085, max=15335, avg=6976.01, stdev=1295.81 00:29:33.797 clat (usec): min=228, max=41041, avg=1186.97, stdev=6049.30 00:29:33.797 lat (usec): min=235, max=41055, avg=1193.95, stdev=6050.29 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 235], 5.00th=[ 243], 10.00th=[ 249], 20.00th=[ 255], 00:29:33.797 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:29:33.797 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 306], 00:29:33.797 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:33.797 | 99.99th=[41157] 00:29:33.797 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:29:33.797 slat (nsec): min=7322, max=33295, avg=8707.94, stdev=1768.70 00:29:33.797 clat (usec): min=166, max=4185, avg=227.56, stdev=127.33 00:29:33.797 lat (usec): min=177, max=4196, avg=236.27, stdev=127.46 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 200], 20.00th=[ 208], 00:29:33.797 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:29:33.797 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 247], 95.00th=[ 260], 00:29:33.797 | 99.00th=[ 302], 99.50th=[ 379], 99.90th=[ 725], 99.95th=[ 4178], 00:29:33.797 | 99.99th=[ 4178] 00:29:33.797 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:29:33.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:33.797 lat (usec) : 250=60.76%, 500=38.11%, 750=0.18% 00:29:33.797 lat (msec) : 10=0.06%, 50=0.89% 00:29:33.797 cpu : usr=0.96%, sys=1.83%, ctx=1687, majf=0, minf=2 00:29:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 issued rwts: total=663,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:33.797 job1: (groupid=0, jobs=1): err= 0: pid=1330061: Wed Jul 24 19:58:50 2024 00:29:33.797 read: IOPS=1483, BW=5934KiB/s (6076kB/s)(5940KiB/1001msec) 00:29:33.797 slat (nsec): min=5008, max=30586, avg=10072.56, stdev=4323.54 00:29:33.797 clat (usec): min=214, max=40981, avg=433.15, stdev=2102.05 00:29:33.797 lat (usec): min=220, max=40992, avg=443.23, stdev=2102.18 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 231], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 260], 00:29:33.797 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 314], 60.00th=[ 338], 00:29:33.797 | 70.00th=[ 363], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 457], 00:29:33.797 | 99.00th=[ 603], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41157], 00:29:33.797 | 99.99th=[41157] 00:29:33.797 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:29:33.797 slat (nsec): min=6430, max=24655, avg=8119.21, stdev=2117.10 00:29:33.797 clat (usec): min=143, max=663, avg=208.36, stdev=43.88 00:29:33.797 lat 
(usec): min=151, max=676, avg=216.47, stdev=44.35 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 176], 00:29:33.797 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 198], 60.00th=[ 208], 00:29:33.797 | 70.00th=[ 223], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 262], 00:29:33.797 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 578], 99.95th=[ 660], 00:29:33.797 | 99.99th=[ 660] 00:29:33.797 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:29:33.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:33.797 lat (usec) : 250=52.66%, 500=45.88%, 750=1.32% 00:29:33.797 lat (msec) : 50=0.13% 00:29:33.797 cpu : usr=1.70%, sys=2.60%, ctx=3022, majf=0, minf=1 00:29:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 issued rwts: total=1485,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:33.797 job2: (groupid=0, jobs=1): err= 0: pid=1330068: Wed Jul 24 19:58:50 2024 00:29:33.797 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4120KiB/1025msec) 00:29:33.797 slat (nsec): min=5110, max=50193, avg=14820.62, stdev=5025.06 00:29:33.797 clat (usec): min=249, max=40999, avg=593.78, stdev=3092.45 00:29:33.797 lat (usec): min=264, max=41018, avg=608.60, stdev=3092.79 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:29:33.797 | 30.00th=[ 285], 40.00th=[ 306], 50.00th=[ 343], 60.00th=[ 383], 00:29:33.797 | 70.00th=[ 400], 80.00th=[ 441], 90.00th=[ 474], 95.00th=[ 506], 00:29:33.797 | 99.00th=[ 562], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:29:33.797 | 99.99th=[41157] 00:29:33.797 write: IOPS=1498, BW=5994KiB/s (6138kB/s)(6144KiB/1025msec); 0 zone resets 00:29:33.797 slat (nsec): min=6361, max=49141, avg=11633.91, stdev=6286.09 00:29:33.797 clat (usec): min=159, max=3866, avg=240.42, stdev=139.04 00:29:33.797 lat (usec): min=167, max=3878, avg=252.06, stdev=139.83 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 190], 00:29:33.797 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 227], 60.00th=[ 239], 00:29:33.797 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 293], 95.00th=[ 363], 00:29:33.797 | 99.00th=[ 404], 99.50th=[ 562], 99.90th=[ 3163], 99.95th=[ 3884], 00:29:33.797 | 99.99th=[ 3884] 00:29:33.797 bw ( KiB/s): min= 4456, max= 7832, per=31.14%, avg=6144.00, stdev=2387.19, samples=2 00:29:33.797 iops : min= 1114, max= 1958, avg=1536.00, stdev=596.80, samples=2 00:29:33.797 lat (usec) : 250=41.19%, 500=56.20%, 750=2.22% 00:29:33.797 lat (msec) : 2=0.08%, 4=0.08%, 50=0.23% 00:29:33.797 cpu : usr=1.76%, sys=3.32%, ctx=2567, majf=0, minf=1 00:29:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 issued rwts: total=1030,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:33.797 job3: (groupid=0, jobs=1): err= 0: pid=1330069: Wed Jul 24 19:58:50 2024 00:29:33.797 read: IOPS=521, BW=2087KiB/s (2138kB/s)(2100KiB/1006msec) 00:29:33.797 
slat (nsec): min=9297, max=34488, avg=13301.27, stdev=3157.36 00:29:33.797 clat (usec): min=297, max=42020, avg=1399.50, stdev=6418.97 00:29:33.797 lat (usec): min=309, max=42036, avg=1412.80, stdev=6419.11 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 338], 20.00th=[ 351], 00:29:33.797 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 379], 00:29:33.797 | 70.00th=[ 388], 80.00th=[ 400], 90.00th=[ 424], 95.00th=[ 494], 00:29:33.797 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:33.797 | 99.99th=[42206] 00:29:33.797 write: IOPS=1017, BW=4072KiB/s (4169kB/s)(4096KiB/1006msec); 0 zone resets 00:29:33.797 slat (nsec): min=8058, max=89102, avg=11367.26, stdev=4749.36 00:29:33.797 clat (usec): min=178, max=4256, avg=240.01, stdev=169.47 00:29:33.797 lat (usec): min=187, max=4269, avg=251.37, stdev=169.58 00:29:33.797 clat percentiles (usec): 00:29:33.797 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 215], 00:29:33.797 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 235], 00:29:33.797 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 269], 00:29:33.797 | 99.00th=[ 322], 99.50th=[ 396], 99.90th=[ 3752], 99.95th=[ 4228], 00:29:33.797 | 99.99th=[ 4228] 00:29:33.797 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:29:33.797 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:29:33.797 lat (usec) : 250=56.68%, 500=41.58%, 750=0.77% 00:29:33.797 lat (msec) : 4=0.06%, 10=0.06%, 50=0.84% 00:29:33.797 cpu : usr=0.90%, sys=2.29%, ctx=1551, majf=0, minf=1 00:29:33.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:33.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:33.797 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:33.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:33.797 00:29:33.797 Run status group 0 (all jobs): 00:29:33.797 READ: bw=13.9MiB/s (14.6MB/s), 2087KiB/s-5934KiB/s (2138kB/s-6076kB/s), io=14.5MiB (15.2MB), run=1001-1038msec 00:29:33.797 WRITE: bw=19.3MiB/s (20.2MB/s), 3946KiB/s-6138KiB/s (4041kB/s-6285kB/s), io=20.0MiB (21.0MB), run=1001-1038msec 00:29:33.798 00:29:33.798 Disk stats (read/write): 00:29:33.798 nvme0n1: ios=708/1024, merge=0/0, ticks=601/220, in_queue=821, util=86.77% 00:29:33.798 nvme0n2: ios=1074/1443, merge=0/0, ticks=709/295, in_queue=1004, util=93.60% 00:29:33.798 nvme0n3: ios=1073/1536, merge=0/0, ticks=637/362, in_queue=999, util=98.12% 00:29:33.798 nvme0n4: ios=581/1024, merge=0/0, ticks=853/241, in_queue=1094, util=97.68% 00:29:33.798 19:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:29:33.798 [global] 00:29:33.798 thread=1 00:29:33.798 invalidate=1 00:29:33.798 rw=write 00:29:33.798 time_based=1 00:29:33.798 runtime=1 00:29:33.798 ioengine=libaio 00:29:33.798 direct=1 00:29:33.798 bs=4096 00:29:33.798 iodepth=128 00:29:33.798 norandommap=0 00:29:33.798 numjobs=1 00:29:33.798 00:29:33.798 verify_dump=1 00:29:33.798 verify_backlog=512 00:29:33.798 verify_state_save=0 00:29:33.798 do_verify=1 00:29:33.798 verify=crc32c-intel 00:29:33.798 [job0] 00:29:33.798 filename=/dev/nvme0n1 00:29:33.798 [job1] 00:29:33.798 filename=/dev/nvme0n2 00:29:33.798 [job2] 
00:29:33.798 filename=/dev/nvme0n3 00:29:33.798 [job3] 00:29:33.798 filename=/dev/nvme0n4 00:29:33.798 Could not set queue depth (nvme0n1) 00:29:33.798 Could not set queue depth (nvme0n2) 00:29:33.798 Could not set queue depth (nvme0n3) 00:29:33.798 Could not set queue depth (nvme0n4) 00:29:34.056 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:34.056 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:34.056 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:34.056 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:34.056 fio-3.35 00:29:34.056 Starting 4 threads 00:29:35.429 00:29:35.429 job0: (groupid=0, jobs=1): err= 0: pid=1330294: Wed Jul 24 19:58:52 2024 00:29:35.429 read: IOPS=2779, BW=10.9MiB/s (11.4MB/s)(11.3MiB/1044msec) 00:29:35.429 slat (usec): min=2, max=26214, avg=187.99, stdev=1363.72 00:29:35.429 clat (usec): min=9009, max=79864, avg=26344.11, stdev=18477.16 00:29:35.429 lat (usec): min=9021, max=79868, avg=26532.10, stdev=18565.03 00:29:35.429 clat percentiles (usec): 00:29:35.429 | 1.00th=[ 9634], 5.00th=[10945], 10.00th=[11731], 20.00th=[12125], 00:29:35.429 | 30.00th=[12518], 40.00th=[13042], 50.00th=[18482], 60.00th=[21890], 00:29:35.429 | 70.00th=[31065], 80.00th=[44827], 90.00th=[54264], 95.00th=[63701], 00:29:35.429 | 99.00th=[80217], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:29:35.429 | 99.99th=[80217] 00:29:35.429 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(12.0MiB/1044msec); 0 zone resets 00:29:35.429 slat (usec): min=3, max=21540, avg=140.12, stdev=1012.25 00:29:35.429 clat (usec): min=8202, max=50310, avg=18208.44, stdev=8994.96 00:29:35.429 lat (usec): min=8207, max=50314, avg=18348.56, stdev=9058.80 00:29:35.429 clat percentiles (usec): 00:29:35.429 | 1.00th=[ 8848], 5.00th=[10945], 10.00th=[11207], 20.00th=[11863], 00:29:35.429 | 30.00th=[12125], 40.00th=[12649], 50.00th=[13042], 60.00th=[15664], 00:29:35.429 | 70.00th=[21365], 80.00th=[24511], 90.00th=[34341], 95.00th=[38536], 00:29:35.429 | 99.00th=[45351], 99.50th=[46400], 99.90th=[50070], 99.95th=[50070], 00:29:35.429 | 99.99th=[50070] 00:29:35.429 bw ( KiB/s): min=10912, max=13664, per=20.26%, avg=12288.00, stdev=1945.96, samples=2 00:29:35.429 iops : min= 2728, max= 3416, avg=3072.00, stdev=486.49, samples=2 00:29:35.429 lat (msec) : 10=2.11%, 20=60.03%, 50=29.71%, 100=8.15% 00:29:35.429 cpu : usr=2.01%, sys=2.59%, ctx=259, majf=0, minf=9 00:29:35.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:29:35.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:35.429 issued rwts: total=2902,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:35.429 job1: (groupid=0, jobs=1): err= 0: pid=1330295: Wed Jul 24 19:58:52 2024 00:29:35.429 read: IOPS=4754, BW=18.6MiB/s (19.5MB/s)(19.4MiB/1047msec) 00:29:35.429 slat (usec): min=2, max=20782, avg=105.63, stdev=784.13 00:29:35.429 clat (usec): min=3521, max=54575, avg=13505.36, stdev=7615.87 00:29:35.429 lat (usec): min=3531, max=60907, avg=13610.99, stdev=7648.67 00:29:35.429 clat percentiles (usec): 00:29:35.429 | 1.00th=[ 4948], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[ 9765], 00:29:35.429 | 
30.00th=[10421], 40.00th=[10814], 50.00th=[11863], 60.00th=[12387], 00:29:35.429 | 70.00th=[13042], 80.00th=[15401], 90.00th=[19268], 95.00th=[22414], 00:29:35.429 | 99.00th=[51119], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:29:35.429 | 99.99th=[54789] 00:29:35.429 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1047msec); 0 zone resets 00:29:35.429 slat (usec): min=4, max=22854, avg=86.85, stdev=515.98 00:29:35.429 clat (usec): min=2269, max=46916, avg=12801.52, stdev=5497.28 00:29:35.429 lat (usec): min=2276, max=46933, avg=12888.37, stdev=5545.86 00:29:35.429 clat percentiles (usec): 00:29:35.429 | 1.00th=[ 3195], 5.00th=[ 5473], 10.00th=[ 6783], 20.00th=[ 9896], 00:29:35.429 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12125], 00:29:35.429 | 70.00th=[12518], 80.00th=[14615], 90.00th=[22414], 95.00th=[25560], 00:29:35.429 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29230], 99.95th=[37487], 00:29:35.429 | 99.99th=[46924] 00:29:35.429 bw ( KiB/s): min=18800, max=22160, per=33.77%, avg=20480.00, stdev=2375.88, samples=2 00:29:35.429 iops : min= 4700, max= 5540, avg=5120.00, stdev=593.97, samples=2 00:29:35.429 lat (msec) : 4=1.21%, 10=21.70%, 20=66.17%, 50=9.68%, 100=1.25% 00:29:35.429 cpu : usr=3.73%, sys=6.41%, ctx=684, majf=0, minf=13 00:29:35.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:29:35.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:35.429 issued rwts: total=4978,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.429 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:35.429 job2: (groupid=0, jobs=1): err= 0: pid=1330296: Wed Jul 24 19:58:52 2024 00:29:35.429 read: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1005msec) 00:29:35.429 slat (usec): min=3, max=11363, avg=135.77, stdev=781.26 00:29:35.429 clat (usec): min=1000, max=40169, avg=17557.70, stdev=6064.78 00:29:35.429 lat (usec): min=5395, max=40183, avg=17693.47, stdev=6123.18 00:29:35.429 clat percentiles (usec): 00:29:35.429 | 1.00th=[ 8979], 5.00th=[11469], 10.00th=[11863], 20.00th=[12518], 00:29:35.429 | 30.00th=[13173], 40.00th=[14353], 50.00th=[15401], 60.00th=[17433], 00:29:35.430 | 70.00th=[19530], 80.00th=[22676], 90.00th=[26870], 95.00th=[29492], 00:29:35.430 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[39584], 00:29:35.430 | 99.99th=[40109] 00:29:35.430 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:29:35.430 slat (usec): min=4, max=19995, avg=139.48, stdev=958.25 00:29:35.430 clat (usec): min=6524, max=36403, avg=17406.24, stdev=4975.02 00:29:35.430 lat (usec): min=7235, max=36410, avg=17545.71, stdev=5064.10 00:29:35.430 clat percentiles (usec): 00:29:35.430 | 1.00th=[11469], 5.00th=[11994], 10.00th=[12125], 20.00th=[13173], 00:29:35.430 | 30.00th=[13566], 40.00th=[14746], 50.00th=[16057], 60.00th=[17695], 00:29:35.430 | 70.00th=[19530], 80.00th=[22152], 90.00th=[24511], 95.00th=[27657], 00:29:35.430 | 99.00th=[30278], 99.50th=[30540], 99.90th=[33162], 99.95th=[35390], 00:29:35.430 | 99.99th=[36439] 00:29:35.430 bw ( KiB/s): min=12288, max=16384, per=23.64%, avg=14336.00, stdev=2896.31, samples=2 00:29:35.430 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:29:35.430 lat (msec) : 2=0.01%, 10=0.91%, 20=70.55%, 50=28.53% 00:29:35.430 cpu : usr=3.78%, sys=3.49%, ctx=245, majf=0, minf=17 00:29:35.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 
32=0.4%, >=64=99.1% 00:29:35.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:35.430 issued rwts: total=3546,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:35.430 job3: (groupid=0, jobs=1): err= 0: pid=1330297: Wed Jul 24 19:58:52 2024 00:29:35.430 read: IOPS=3576, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:29:35.430 slat (usec): min=2, max=15008, avg=129.08, stdev=809.34 00:29:35.430 clat (usec): min=934, max=40156, avg=16240.12, stdev=5288.14 00:29:35.430 lat (usec): min=8870, max=40170, avg=16369.21, stdev=5345.10 00:29:35.430 clat percentiles (usec): 00:29:35.430 | 1.00th=[ 9765], 5.00th=[11469], 10.00th=[11863], 20.00th=[12518], 00:29:35.430 | 30.00th=[13042], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:29:35.430 | 70.00th=[15401], 80.00th=[22414], 90.00th=[25035], 95.00th=[27395], 00:29:35.430 | 99.00th=[30016], 99.50th=[33817], 99.90th=[37487], 99.95th=[39584], 00:29:35.430 | 99.99th=[40109] 00:29:35.430 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:29:35.430 slat (usec): min=3, max=22009, avg=127.19, stdev=814.23 00:29:35.430 clat (usec): min=8171, max=38307, avg=16110.82, stdev=6304.99 00:29:35.430 lat (usec): min=8177, max=38313, avg=16238.00, stdev=6356.56 00:29:35.430 clat percentiles (usec): 00:29:35.430 | 1.00th=[ 9372], 5.00th=[11863], 10.00th=[12256], 20.00th=[12649], 00:29:35.430 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13698], 60.00th=[14484], 00:29:35.430 | 70.00th=[15270], 80.00th=[17171], 90.00th=[26084], 95.00th=[33817], 00:29:35.430 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:29:35.430 | 99.99th=[38536] 00:29:35.430 bw ( KiB/s): min=12544, max=19280, per=26.24%, avg=15912.00, stdev=4763.07, samples=2 00:29:35.430 iops : min= 3136, max= 4820, avg=3978.00, stdev=1190.77, samples=2 00:29:35.430 lat (usec) : 1000=0.01% 00:29:35.430 lat (msec) : 10=1.27%, 20=79.84%, 50=18.87% 00:29:35.430 cpu : usr=2.79%, sys=3.09%, ctx=430, majf=0, minf=11 00:29:35.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:35.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:35.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:35.430 issued rwts: total=3594,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:35.430 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:35.430 00:29:35.430 Run status group 0 (all jobs): 00:29:35.430 READ: bw=56.0MiB/s (58.8MB/s), 10.9MiB/s-18.6MiB/s (11.4MB/s-19.5MB/s), io=58.7MiB (61.5MB), run=1005-1047msec 00:29:35.430 WRITE: bw=59.2MiB/s (62.1MB/s), 11.5MiB/s-19.1MiB/s (12.1MB/s-20.0MB/s), io=62.0MiB (65.0MB), run=1005-1047msec 00:29:35.430 00:29:35.430 Disk stats (read/write): 00:29:35.430 nvme0n1: ios=2071/2336, merge=0/0, ticks=20030/20560, in_queue=40590, util=98.70% 00:29:35.430 nvme0n2: ios=4135/4375, merge=0/0, ticks=50187/54576, in_queue=104763, util=97.97% 00:29:35.430 nvme0n3: ios=3093/3079, merge=0/0, ticks=26815/24686, in_queue=51501, util=97.71% 00:29:35.430 nvme0n4: ios=3397/3584, merge=0/0, ticks=21647/17906, in_queue=39553, util=98.21% 00:29:35.430 19:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:29:35.430 [global] 
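The fio-wrapper call just above kicks off the second pass, and the job file echoed around this point is identical to the first except that [global] now carries rw=randwrite. A sketch of the delta, assuming the write.fio file from the previous sketch:

# Sketch: derive the random-write job file from the sequential one.
sed 's/^rw=write$/rw=randwrite/' write.fio > randwrite.fio
fio randwrite.fio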
00:29:35.430 thread=1 00:29:35.430 invalidate=1 00:29:35.430 rw=randwrite 00:29:35.430 time_based=1 00:29:35.430 runtime=1 00:29:35.430 ioengine=libaio 00:29:35.430 direct=1 00:29:35.430 bs=4096 00:29:35.430 iodepth=128 00:29:35.430 norandommap=0 00:29:35.430 numjobs=1 00:29:35.430 00:29:35.430 verify_dump=1 00:29:35.430 verify_backlog=512 00:29:35.430 verify_state_save=0 00:29:35.430 do_verify=1 00:29:35.430 verify=crc32c-intel 00:29:35.430 [job0] 00:29:35.430 filename=/dev/nvme0n1 00:29:35.430 [job1] 00:29:35.430 filename=/dev/nvme0n2 00:29:35.430 [job2] 00:29:35.430 filename=/dev/nvme0n3 00:29:35.430 [job3] 00:29:35.430 filename=/dev/nvme0n4 00:29:35.430 Could not set queue depth (nvme0n1) 00:29:35.430 Could not set queue depth (nvme0n2) 00:29:35.430 Could not set queue depth (nvme0n3) 00:29:35.430 Could not set queue depth (nvme0n4) 00:29:35.430 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:35.430 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:35.430 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:35.430 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:29:35.430 fio-3.35 00:29:35.430 Starting 4 threads 00:29:36.804 00:29:36.804 job0: (groupid=0, jobs=1): err= 0: pid=1330523: Wed Jul 24 19:58:53 2024 00:29:36.804 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:29:36.804 slat (usec): min=2, max=12393, avg=87.76, stdev=571.86 00:29:36.804 clat (usec): min=5562, max=25883, avg=11422.57, stdev=2454.63 00:29:36.804 lat (usec): min=5574, max=25938, avg=11510.33, stdev=2490.49 00:29:36.804 clat percentiles (usec): 00:29:36.804 | 1.00th=[ 5669], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[10028], 00:29:36.804 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:29:36.804 | 70.00th=[11994], 80.00th=[12780], 90.00th=[13698], 95.00th=[15533], 00:29:36.804 | 99.00th=[21365], 99.50th=[22676], 99.90th=[23987], 99.95th=[24249], 00:29:36.804 | 99.99th=[25822] 00:29:36.804 write: IOPS=5708, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1006msec); 0 zone resets 00:29:36.804 slat (usec): min=3, max=11356, avg=79.38, stdev=529.01 00:29:36.804 clat (usec): min=914, max=24066, avg=11003.94, stdev=2321.93 00:29:36.804 lat (usec): min=919, max=24072, avg=11083.32, stdev=2358.56 00:29:36.804 clat percentiles (usec): 00:29:36.804 | 1.00th=[ 5735], 5.00th=[ 6587], 10.00th=[ 8225], 20.00th=[ 9896], 00:29:36.804 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:29:36.804 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13829], 95.00th=[14877], 00:29:36.804 | 99.00th=[17695], 99.50th=[17957], 99.90th=[21627], 99.95th=[21890], 00:29:36.804 | 99.99th=[23987] 00:29:36.804 bw ( KiB/s): min=22236, max=22920, per=34.43%, avg=22578.00, stdev=483.66, samples=2 00:29:36.804 iops : min= 5559, max= 5730, avg=5644.50, stdev=120.92, samples=2 00:29:36.804 lat (usec) : 1000=0.04% 00:29:36.804 lat (msec) : 2=0.01%, 10=19.69%, 20=79.32%, 50=0.93% 00:29:36.804 cpu : usr=5.17%, sys=9.75%, ctx=415, majf=0, minf=11 00:29:36.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:29:36.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:36.804 issued rwts: total=5632,5743,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:29:36.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.804 job1: (groupid=0, jobs=1): err= 0: pid=1330524: Wed Jul 24 19:58:53 2024 00:29:36.804 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:29:36.804 slat (usec): min=2, max=25590, avg=143.53, stdev=1067.20 00:29:36.804 clat (usec): min=8062, max=70702, avg=18044.12, stdev=13460.91 00:29:36.804 lat (usec): min=8076, max=84743, avg=18187.66, stdev=13598.07 00:29:36.804 clat percentiles (usec): 00:29:36.804 | 1.00th=[ 8848], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10814], 00:29:36.804 | 30.00th=[11207], 40.00th=[11469], 50.00th=[12387], 60.00th=[12911], 00:29:36.804 | 70.00th=[13960], 80.00th=[20317], 90.00th=[43779], 95.00th=[53216], 00:29:36.804 | 99.00th=[62653], 99.50th=[65799], 99.90th=[69731], 99.95th=[70779], 00:29:36.804 | 99.99th=[70779] 00:29:36.804 write: IOPS=3300, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1008msec); 0 zone resets 00:29:36.804 slat (usec): min=3, max=22851, avg=156.70, stdev=1164.65 00:29:36.804 clat (usec): min=1352, max=129621, avg=21755.21, stdev=22838.28 00:29:36.804 lat (msec): min=2, max=129, avg=21.91, stdev=22.99 00:29:36.804 clat percentiles (msec): 00:29:36.804 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 11], 00:29:36.804 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 14], 00:29:36.804 | 70.00th=[ 23], 80.00th=[ 27], 90.00th=[ 40], 95.00th=[ 70], 00:29:36.804 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 130], 00:29:36.804 | 99.99th=[ 130] 00:29:36.804 bw ( KiB/s): min= 5112, max=20480, per=19.51%, avg=12796.00, stdev=10866.82, samples=2 00:29:36.804 iops : min= 1278, max= 5120, avg=3199.00, stdev=2716.70, samples=2 00:29:36.804 lat (msec) : 2=0.02%, 4=0.09%, 10=10.03%, 20=62.35%, 50=20.94% 00:29:36.804 lat (msec) : 100=4.83%, 250=1.73% 00:29:36.804 cpu : usr=4.37%, sys=5.16%, ctx=232, majf=0, minf=13 00:29:36.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:29:36.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:36.804 issued rwts: total=3072,3327,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.804 job2: (groupid=0, jobs=1): err= 0: pid=1330525: Wed Jul 24 19:58:53 2024 00:29:36.804 read: IOPS=3540, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1004msec) 00:29:36.804 slat (usec): min=2, max=23312, avg=156.60, stdev=1096.36 00:29:36.804 clat (usec): min=2867, max=67856, avg=19737.79, stdev=10513.58 00:29:36.804 lat (usec): min=7776, max=67874, avg=19894.39, stdev=10573.99 00:29:36.804 clat percentiles (usec): 00:29:36.804 | 1.00th=[ 8979], 5.00th=[10552], 10.00th=[11600], 20.00th=[13042], 00:29:36.804 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14353], 60.00th=[17433], 00:29:36.804 | 70.00th=[23462], 80.00th=[25560], 90.00th=[32375], 95.00th=[39584], 00:29:36.804 | 99.00th=[60556], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:29:36.804 | 99.99th=[67634] 00:29:36.804 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:29:36.804 slat (usec): min=3, max=17940, avg=116.04, stdev=701.06 00:29:36.804 clat (usec): min=989, max=46550, avg=15955.00, stdev=5372.96 00:29:36.804 lat (usec): min=998, max=46568, avg=16071.03, stdev=5410.41 00:29:36.804 clat percentiles (usec): 00:29:36.804 | 1.00th=[ 9765], 5.00th=[10945], 10.00th=[11994], 20.00th=[12387], 00:29:36.804 | 30.00th=[12649], 
40.00th=[12911], 50.00th=[13042], 60.00th=[14484], 00:29:36.804 | 70.00th=[16581], 80.00th=[20579], 90.00th=[25297], 95.00th=[25822], 00:29:36.804 | 99.00th=[35390], 99.50th=[35390], 99.90th=[38536], 99.95th=[46400], 00:29:36.804 | 99.99th=[46400] 00:29:36.804 bw ( KiB/s): min=12288, max=16384, per=21.86%, avg=14336.00, stdev=2896.31, samples=2 00:29:36.804 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:29:36.804 lat (usec) : 1000=0.03% 00:29:36.804 lat (msec) : 4=0.01%, 10=1.67%, 20=69.79%, 50=27.17%, 100=1.33% 00:29:36.804 cpu : usr=2.59%, sys=6.28%, ctx=369, majf=0, minf=9 00:29:36.804 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:29:36.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.804 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:36.804 issued rwts: total=3555,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.804 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.804 job3: (groupid=0, jobs=1): err= 0: pid=1330526: Wed Jul 24 19:58:53 2024 00:29:36.804 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:29:36.804 slat (usec): min=3, max=22875, avg=142.12, stdev=1154.21 00:29:36.804 clat (usec): min=8810, max=47164, avg=18754.93, stdev=6274.61 00:29:36.804 lat (usec): min=8818, max=47176, avg=18897.05, stdev=6381.18 00:29:36.804 clat percentiles (usec): 00:29:36.804 | 1.00th=[11338], 5.00th=[11600], 10.00th=[11863], 20.00th=[12780], 00:29:36.804 | 30.00th=[13960], 40.00th=[15795], 50.00th=[17433], 60.00th=[18744], 00:29:36.805 | 70.00th=[22676], 80.00th=[24249], 90.00th=[26346], 95.00th=[28181], 00:29:36.805 | 99.00th=[40109], 99.50th=[43254], 99.90th=[45876], 99.95th=[46924], 00:29:36.805 | 99.99th=[46924] 00:29:36.805 write: IOPS=3850, BW=15.0MiB/s (15.8MB/s)(15.1MiB/1005msec); 0 zone resets 00:29:36.805 slat (usec): min=4, max=19349, avg=118.50, stdev=992.55 00:29:36.805 clat (usec): min=2203, max=45939, avg=15524.54, stdev=5271.56 00:29:36.805 lat (usec): min=3291, max=45968, avg=15643.03, stdev=5357.18 00:29:36.805 clat percentiles (usec): 00:29:36.805 | 1.00th=[ 6980], 5.00th=[ 8160], 10.00th=[ 9503], 20.00th=[11207], 00:29:36.805 | 30.00th=[12256], 40.00th=[13304], 50.00th=[14222], 60.00th=[15926], 00:29:36.805 | 70.00th=[17695], 80.00th=[21365], 90.00th=[23987], 95.00th=[25560], 00:29:36.805 | 99.00th=[26084], 99.50th=[26084], 99.90th=[37487], 99.95th=[45876], 00:29:36.805 | 99.99th=[45876] 00:29:36.805 bw ( KiB/s): min=13040, max=16896, per=22.83%, avg=14968.00, stdev=2726.60, samples=2 00:29:36.805 iops : min= 3260, max= 4224, avg=3742.00, stdev=681.65, samples=2 00:29:36.805 lat (msec) : 4=0.09%, 10=7.34%, 20=63.64%, 50=28.92% 00:29:36.805 cpu : usr=4.38%, sys=6.47%, ctx=213, majf=0, minf=17 00:29:36.805 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:29:36.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:36.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:36.805 issued rwts: total=3584,3870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:36.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:36.805 00:29:36.805 Run status group 0 (all jobs): 00:29:36.805 READ: bw=61.4MiB/s (64.4MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-22.9MB/s), io=61.9MiB (64.9MB), run=1004-1008msec 00:29:36.805 WRITE: bw=64.0MiB/s (67.1MB/s), 12.9MiB/s-22.3MiB/s (13.5MB/s-23.4MB/s), io=64.5MiB (67.7MB), run=1004-1008msec 00:29:36.805 00:29:36.805 Disk stats 
(read/write): 00:29:36.805 nvme0n1: ios=4658/4927, merge=0/0, ticks=33374/32496, in_queue=65870, util=85.67% 00:29:36.805 nvme0n2: ios=3120/3111, merge=0/0, ticks=31728/25013, in_queue=56741, util=89.24% 00:29:36.805 nvme0n3: ios=2947/3072, merge=0/0, ticks=32056/29002, in_queue=61058, util=93.65% 00:29:36.805 nvme0n4: ios=2926/3072, merge=0/0, ticks=56794/48423, in_queue=105217, util=94.23% 00:29:36.805 19:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:29:36.805 19:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1330667 00:29:36.805 19:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:29:36.805 19:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:29:36.805 [global] 00:29:36.805 thread=1 00:29:36.805 invalidate=1 00:29:36.805 rw=read 00:29:36.805 time_based=1 00:29:36.805 runtime=10 00:29:36.805 ioengine=libaio 00:29:36.805 direct=1 00:29:36.805 bs=4096 00:29:36.805 iodepth=1 00:29:36.805 norandommap=1 00:29:36.805 numjobs=1 00:29:36.805 00:29:36.805 [job0] 00:29:36.805 filename=/dev/nvme0n1 00:29:36.805 [job1] 00:29:36.805 filename=/dev/nvme0n2 00:29:36.805 [job2] 00:29:36.805 filename=/dev/nvme0n3 00:29:36.805 [job3] 00:29:36.805 filename=/dev/nvme0n4 00:29:36.805 Could not set queue depth (nvme0n1) 00:29:36.805 Could not set queue depth (nvme0n2) 00:29:36.805 Could not set queue depth (nvme0n3) 00:29:36.805 Could not set queue depth (nvme0n4) 00:29:36.805 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:36.805 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:36.805 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:36.805 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:29:36.805 fio-3.35 00:29:36.805 Starting 4 threads 00:29:40.080 19:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:29:40.080 19:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:29:40.080 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=3514368, buflen=4096 00:29:40.080 fio: pid=1330876, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:29:40.337 19:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:40.337 19:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:29:40.337 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=360448, buflen=4096 00:29:40.337 fio: pid=1330875, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:29:40.595 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=5648384, buflen=4096 00:29:40.595 fio: pid=1330873, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:29:40.595 19:58:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:40.595 19:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:29:40.853 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:40.853 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:29:40.853 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=2547712, buflen=4096 00:29:40.853 fio: pid=1330874, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:29:40.853 00:29:40.853 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1330873: Wed Jul 24 19:58:58 2024 00:29:40.853 read: IOPS=397, BW=1588KiB/s (1626kB/s)(5516KiB/3473msec) 00:29:40.853 slat (usec): min=4, max=14626, avg=23.23, stdev=421.14 00:29:40.853 clat (usec): min=195, max=41334, avg=2477.03, stdev=9240.06 00:29:40.853 lat (usec): min=200, max=41368, avg=2500.27, stdev=9249.60 00:29:40.853 clat percentiles (usec): 00:29:40.853 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:29:40.853 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 243], 00:29:40.853 | 70.00th=[ 249], 80.00th=[ 273], 90.00th=[ 375], 95.00th=[41157], 00:29:40.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:40.853 | 99.99th=[41157] 00:29:40.853 bw ( KiB/s): min= 96, max= 1080, per=8.76%, avg=272.00, stdev=395.98, samples=6 00:29:40.853 iops : min= 24, max= 270, avg=68.00, stdev=98.99, samples=6 00:29:40.853 lat (usec) : 250=70.29%, 500=23.62%, 750=0.36% 00:29:40.853 lat (msec) : 2=0.07%, 10=0.14%, 50=5.43% 00:29:40.853 cpu : usr=0.20%, sys=0.35%, ctx=1383, majf=0, minf=1 00:29:40.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:40.853 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1330874: Wed Jul 24 19:58:58 2024 00:29:40.853 read: IOPS=164, BW=655KiB/s (671kB/s)(2488KiB/3796msec) 00:29:40.853 slat (usec): min=4, max=14813, avg=86.32, stdev=897.91 00:29:40.853 clat (usec): min=209, max=42225, avg=6012.84, stdev=14252.00 00:29:40.853 lat (usec): min=221, max=52984, avg=6088.42, stdev=14337.32 00:29:40.853 clat percentiles (usec): 00:29:40.853 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 245], 00:29:40.853 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 293], 00:29:40.853 | 70.00th=[ 314], 80.00th=[ 396], 90.00th=[41157], 95.00th=[42206], 00:29:40.853 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:29:40.853 | 99.99th=[42206] 00:29:40.853 bw ( KiB/s): min= 96, max= 2504, per=21.48%, avg=667.57, stdev=920.44, samples=7 00:29:40.853 iops : min= 24, max= 626, avg=166.86, stdev=230.08, samples=7 00:29:40.853 lat (usec) : 250=25.36%, 500=58.91%, 750=1.61% 
00:29:40.853 lat (msec) : 20=0.16%, 50=13.80% 00:29:40.853 cpu : usr=0.18%, sys=0.32%, ctx=630, majf=0, minf=1 00:29:40.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 issued rwts: total=623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:40.853 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1330875: Wed Jul 24 19:58:58 2024 00:29:40.853 read: IOPS=27, BW=108KiB/s (111kB/s)(352KiB/3249msec) 00:29:40.853 slat (nsec): min=7240, max=37566, avg=21066.75, stdev=10145.44 00:29:40.853 clat (usec): min=297, max=41396, avg=36550.61, stdev=12544.57 00:29:40.853 lat (usec): min=305, max=41428, avg=36571.78, stdev=12543.14 00:29:40.853 clat percentiles (usec): 00:29:40.853 | 1.00th=[ 297], 5.00th=[ 453], 10.00th=[ 1020], 20.00th=[41157], 00:29:40.853 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:40.853 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:29:40.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:29:40.853 | 99.99th=[41157] 00:29:40.853 bw ( KiB/s): min= 96, max= 144, per=3.51%, avg=109.33, stdev=19.38, samples=6 00:29:40.853 iops : min= 24, max= 36, avg=27.33, stdev= 4.84, samples=6 00:29:40.853 lat (usec) : 500=7.87%, 750=1.12% 00:29:40.853 lat (msec) : 2=1.12%, 20=1.12%, 50=87.64% 00:29:40.853 cpu : usr=0.12%, sys=0.00%, ctx=89, majf=0, minf=1 00:29:40.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 issued rwts: total=89,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:40.853 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1330876: Wed Jul 24 19:58:58 2024 00:29:40.853 read: IOPS=288, BW=1152KiB/s (1180kB/s)(3432KiB/2979msec) 00:29:40.853 slat (nsec): min=5298, max=63490, avg=19512.04, stdev=9539.48 00:29:40.853 clat (usec): min=221, max=41518, avg=3415.06, stdev=10844.17 00:29:40.853 lat (usec): min=228, max=41551, avg=3434.58, stdev=10844.31 00:29:40.853 clat percentiles (usec): 00:29:40.853 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 251], 00:29:40.853 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:29:40.853 | 70.00th=[ 310], 80.00th=[ 330], 90.00th=[ 490], 95.00th=[41157], 00:29:40.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:29:40.853 | 99.99th=[41681] 00:29:40.853 bw ( KiB/s): min= 96, max= 5248, per=37.19%, avg=1155.20, stdev=2288.26, samples=5 00:29:40.853 iops : min= 24, max= 1312, avg=288.80, stdev=572.06, samples=5 00:29:40.853 lat (usec) : 250=20.02%, 500=70.08%, 750=1.98%, 1000=0.12% 00:29:40.853 lat (msec) : 50=7.68% 00:29:40.853 cpu : usr=0.30%, sys=0.60%, ctx=859, majf=0, minf=1 00:29:40.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.853 issued rwts: total=859,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:29:40.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:29:40.853 00:29:40.853 Run status group 0 (all jobs): 00:29:40.853 READ: bw=3105KiB/s (3180kB/s), 108KiB/s-1588KiB/s (111kB/s-1626kB/s), io=11.5MiB (12.1MB), run=2979-3796msec 00:29:40.853 00:29:40.853 Disk stats (read/write): 00:29:40.853 nvme0n1: ios=1009/0, merge=0/0, ticks=3675/0, in_queue=3675, util=99.46% 00:29:40.853 nvme0n2: ios=617/0, merge=0/0, ticks=3533/0, in_queue=3533, util=95.58% 00:29:40.853 nvme0n3: ios=113/0, merge=0/0, ticks=3201/0, in_queue=3201, util=98.85% 00:29:40.853 nvme0n4: ios=855/0, merge=0/0, ticks=2801/0, in_queue=2801, util=96.71% 00:29:41.111 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:41.111 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:29:41.369 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:41.369 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:29:41.627 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:41.627 19:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:29:41.885 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:29:41.885 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:29:42.143 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:29:42.143 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1330667 00:29:42.143 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:29:42.143 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:29:42.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1220 -- # local i=0 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -o NAME,SERIAL 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1228 -- # lsblk -l -o NAME,SERIAL 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1228 -- # grep -q -w SPDKISFASTANDAWESOME 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1232 -- # return 0 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:29:42.400 nvmf hotplug test: fio failed as expected 00:29:42.400 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # nvmfcleanup 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.658 rmmod nvme_tcp 00:29:42.658 rmmod nvme_fabrics 00:29:42.658 rmmod nvme_keyring 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # '[' -n 1328774 ']' 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@494 -- # killprocess 1328774 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' -z 1328774 ']' 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@955 -- # kill -0 1328774 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # uname 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1328774 00:29:42.658 19:58:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1328774' 00:29:42.658 killing process with pid 1328774 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # kill 1328774 00:29:42.658 19:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@975 -- # wait 1328774 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@282 -- # remove_spdk_ns 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.916 19:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:29:45.449 00:29:45.449 real 0m23.191s 00:29:45.449 user 1m4.606s 00:29:45.449 sys 0m10.916s 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # xtrace_disable 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:29:45.449 ************************************ 00:29:45.449 END TEST nvmf_fio_target 00:29:45.449 ************************************ 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1108 -- # xtrace_disable 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:45.449 ************************************ 00:29:45.449 START TEST nvmf_bdevio 00:29:45.449 ************************************ 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:29:45.449 * Looking for test storage... 
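Before nvmf_bdevio can run anything, nvmf_tcp_init (traced below) assembles the same two-namespace TCP rig used throughout these tests: the first detected e810 port is moved into a private network namespace as the target interface, the second stays in the root namespace as the initiator, and connectivity is checked with ping in both directions. A condensed sketch of that sequence, using the interface names cvl_0_0 and cvl_0_1 exactly as detected in this run:

# Sketch: the namespace-based TCP test rig built by nvmf_tcp_init below.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Every command here appears verbatim in the trace that follows; only the preliminary 'ip -4 addr flush' calls and ordering noise are omitted.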
00:29:45.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.449 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.450 19:59:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@452 -- # prepare_net_devs 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@414 -- # local -g is_hw=no 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # remove_spdk_ns 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # xtrace_disable 00:29:45.450 19:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@295 -- # pci_devs=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@295 -- # local -a pci_devs 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # pci_net_devs=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # pci_drivers=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # local -A pci_drivers 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # net_devs=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # local -ga 
net_devs 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # e810=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # local -ga e810 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@301 -- # x722=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@301 -- # local -ga x722 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # mlx=() 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # local -ga mlx 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:47.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:47.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:47.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.350 
19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:47.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # is_hw=yes 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:29:47.350 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:29:47.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:29:47.351 00:29:47.351 --- 10.0.0.2 ping statistics --- 00:29:47.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.351 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:29:47.351 00:29:47.351 --- 10.0.0.1 ping statistics --- 00:29:47.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.351 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@426 -- # return 0 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@725 -- # xtrace_disable 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@485 -- # nvmfpid=1333486 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@486 -- # waitforlisten 1333486 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@832 -- # '[' -z 
1333486 ']' 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local max_retries=100 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@841 -- # xtrace_disable 00:29:47.351 19:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:47.351 [2024-07-24 19:59:04.496804] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:47.351 [2024-07-24 19:59:04.498086] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:29:47.351 [2024-07-24 19:59:04.498144] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.351 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.351 [2024-07-24 19:59:04.571669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:47.351 [2024-07-24 19:59:04.693198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.351 [2024-07-24 19:59:04.693277] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.351 [2024-07-24 19:59:04.693295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.351 [2024-07-24 19:59:04.693308] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.351 [2024-07-24 19:59:04.693320] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.351 [2024-07-24 19:59:04.693403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:47.351 [2024-07-24 19:59:04.693457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:47.351 [2024-07-24 19:59:04.693507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:47.351 [2024-07-24 19:59:04.693510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.609 [2024-07-24 19:59:04.796570] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:47.609 [2024-07-24 19:59:04.796808] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.609 [2024-07-24 19:59:04.797102] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:47.609 [2024-07-24 19:59:04.797677] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.609 [2024-07-24 19:59:04.797944] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
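
The setup traced above (nvmf/common.sh@233-272) carves the two E810 ports into a point-to-point rig: cvl_0_0 moves into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1); nvmf_tgt is then launched inside that namespace in interrupt mode. Condensed from the trace into a minimal re-creation (interface names, addresses, and paths are specific to this rig; paths shortened):

  # split the two ports across namespaces
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP (port 4420) on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # target runs inside the namespace; -m 0x78 = four reactors on cores 3-6, interrupt-driven
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
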
00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@865 -- # return 0 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@731 -- # xtrace_disable 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:48.176 [2024-07-24 19:59:05.510329] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:48.176 Malloc0 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:48.176 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:48.434 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:48.435 [2024-07-24 
19:59:05.570455] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@536 -- # config=() 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@536 -- # local subsystem config 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:29:48.435 { 00:29:48.435 "params": { 00:29:48.435 "name": "Nvme$subsystem", 00:29:48.435 "trtype": "$TEST_TRANSPORT", 00:29:48.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.435 "adrfam": "ipv4", 00:29:48.435 "trsvcid": "$NVMF_PORT", 00:29:48.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.435 "hdgst": ${hdgst:-false}, 00:29:48.435 "ddgst": ${ddgst:-false} 00:29:48.435 }, 00:29:48.435 "method": "bdev_nvme_attach_controller" 00:29:48.435 } 00:29:48.435 EOF 00:29:48.435 )") 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@558 -- # cat 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # jq . 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@561 -- # IFS=, 00:29:48.435 19:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:29:48.435 "params": { 00:29:48.435 "name": "Nvme1", 00:29:48.435 "trtype": "tcp", 00:29:48.435 "traddr": "10.0.0.2", 00:29:48.435 "adrfam": "ipv4", 00:29:48.435 "trsvcid": "4420", 00:29:48.435 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.435 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.435 "hdgst": false, 00:29:48.435 "ddgst": false 00:29:48.435 }, 00:29:48.435 "method": "bdev_nvme_attach_controller" 00:29:48.435 }' 00:29:48.435 [2024-07-24 19:59:05.619346] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
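
Once the target answers on /var/tmp/spdk.sock, target/bdevio.sh@18-22 provisions it over JSON-RPC and then starts the bdevio app with the bdev_nvme_attach_controller config printed above, handed in through a process substitution (--json /dev/fd/62). The same provisioning done by hand would look roughly like this, assuming SPDK's scripts/rpc.py, which defaults to the /var/tmp/spdk.sock socket the target opened:

  RPC=./scripts/rpc.py    # from the SPDK tree
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0    # 64 MiB of 512 B blocks, matching "Nvme1n1: 131072 blocks" below
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
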
00:29:48.435 [2024-07-24 19:59:05.619419] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333639 ] 00:29:48.435 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.435 [2024-07-24 19:59:05.680302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:48.435 [2024-07-24 19:59:05.795922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.435 [2024-07-24 19:59:05.795972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.435 [2024-07-24 19:59:05.795974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.693 I/O targets: 00:29:48.693 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:29:48.693 00:29:48.693 00:29:48.693 CUnit - A unit testing framework for C - Version 2.1-3 00:29:48.693 http://cunit.sourceforge.net/ 00:29:48.693 00:29:48.693 00:29:48.693 Suite: bdevio tests on: Nvme1n1 00:29:48.693 Test: blockdev write read block ...passed 00:29:48.950 Test: blockdev write zeroes read block ...passed 00:29:48.950 Test: blockdev write zeroes read no split ...passed 00:29:48.950 Test: blockdev write zeroes read split ...passed 00:29:48.950 Test: blockdev write zeroes read split partial ...passed 00:29:48.950 Test: blockdev reset ...[2024-07-24 19:59:06.205067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.950 [2024-07-24 19:59:06.205177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x232c580 (9): Bad file descriptor 00:29:48.950 [2024-07-24 19:59:06.249576] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:48.950 passed 00:29:48.950 Test: blockdev write read 8 blocks ...passed 00:29:48.950 Test: blockdev write read size > 128k ...passed 00:29:48.950 Test: blockdev write read invalid size ...passed 00:29:49.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:29:49.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:29:49.208 Test: blockdev write read max offset ...passed 00:29:49.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:29:49.208 Test: blockdev writev readv 8 blocks ...passed 00:29:49.208 Test: blockdev writev readv 30 x 1block ...passed 00:29:49.208 Test: blockdev writev readv block ...passed 00:29:49.208 Test: blockdev writev readv size > 128k ...passed 00:29:49.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:29:49.208 Test: blockdev comparev and writev ...[2024-07-24 19:59:06.544581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.544627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.544651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.544667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.545132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.545158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.545179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.545199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.545652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.545677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.545704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.545720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.546152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.546177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:49.208 [2024-07-24 19:59:06.546198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:29:49.208 [2024-07-24 19:59:06.546213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:49.498 passed 00:29:49.498 Test: blockdev nvme passthru rw ...passed 00:29:49.498 Test: blockdev nvme passthru vendor specific ...[2024-07-24 19:59:06.629557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.498 [2024-07-24 19:59:06.629588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:49.498 [2024-07-24 19:59:06.629748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.498 [2024-07-24 19:59:06.629771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:49.498 [2024-07-24 19:59:06.629921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.498 [2024-07-24 19:59:06.629943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:49.498 [2024-07-24 19:59:06.630086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:49.498 [2024-07-24 19:59:06.630108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:49.498 passed 00:29:49.498 Test: blockdev nvme admin passthru ...passed 00:29:49.498 Test: blockdev copy ...passed 00:29:49.498 00:29:49.498 Run Summary: Type Total Ran Passed Failed Inactive 00:29:49.498 suites 1 1 n/a 0 0 00:29:49.498 tests 23 23 23 0 0 00:29:49.498 asserts 152 152 152 0 n/a 00:29:49.498 00:29:49.498 Elapsed time = 1.348 seconds 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # nvmfcleanup 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:49.756 rmmod nvme_tcp 00:29:49.756 rmmod nvme_fabrics 00:29:49.756 rmmod nvme_keyring 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # '[' -n 1333486 ']' 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@494 -- # killprocess 1333486 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' -z 1333486 ']' 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@955 -- # kill -0 1333486 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # uname 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1333486 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # process_name=reactor_3 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@961 -- # '[' reactor_3 = sudo ']' 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1333486' 00:29:49.756 killing process with pid 1333486 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # kill 1333486 00:29:49.756 19:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@975 -- # wait 1333486 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@282 -- # remove_spdk_ns 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.015 19:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.544 19:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:29:52.544 00:29:52.544 real 0m7.036s 00:29:52.544 user 0m9.252s 00:29:52.544 sys 0m2.605s 00:29:52.544 19:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # xtrace_disable 00:29:52.544 19:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:29:52.544 ************************************ 00:29:52.544 END TEST nvmf_bdevio 00:29:52.544 ************************************ 00:29:52.544 19:59:09 
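
killprocess refuses to touch anything named sudo, signals the pid, and waits for it; nvmftestfini unwinds the rest in the order traced above. A rough sketch of that unwind, assuming the names from this run (ip netns delete stands in for what the remove_spdk_ns helper does internally):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  modprobe -v -r nvme-tcp              # also drops nvme_fabrics / nvme_keyring, per the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 1333486 && wait 1333486         # stop nvmf_tgt from the shell that launched it
  ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1
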
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:29:52.544 00:29:52.544 real 3m53.482s 00:29:52.544 user 8m35.294s 00:29:52.544 sys 1m31.550s 00:29:52.544 19:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # xtrace_disable 00:29:52.544 19:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:52.544 ************************************ 00:29:52.544 END TEST nvmf_target_core_interrupt_mode 00:29:52.544 ************************************ 00:29:52.544 19:59:09 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:52.544 19:59:09 nvmf_tcp -- common/autotest_common.sh@1102 -- # '[' 4 -le 1 ']' 00:29:52.544 19:59:09 nvmf_tcp -- common/autotest_common.sh@1108 -- # xtrace_disable 00:29:52.544 19:59:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:52.544 ************************************ 00:29:52.544 START TEST nvmf_interrupt 00:29:52.544 ************************************ 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:29:52.544 * Looking for test storage... 00:29:52.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.544 
19:59:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # nvmftestinit 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@452 -- # prepare_net_devs 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@414 -- # local -g is_hw=no 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # remove_spdk_ns 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # xtrace_disable 00:29:52.544 19:59:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@295 -- # pci_devs=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@295 -- # local -a pci_devs 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # pci_net_devs=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # pci_drivers=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # local -A pci_drivers 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # net_devs=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # local -ga net_devs 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # e810=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # local -ga e810 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@301 -- # x722=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@301 -- # local -ga x722 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # mlx=() 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # local -ga mlx 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:29:54.443 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:54.444 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:54.444 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
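
This second discovery pass (the nvmf_interrupt test re-sources nvmf/common.sh) matches each adapter's vendor/device IDs against the e810 list (0x8086:0x159b here) and resolves every PCI function to its kernel netdev through sysfs; the resulting cvl_0_* names appear just below. The same lookup by hand for one function:

  pci=0000:0a:00.0
  cat /sys/bus/pci/devices/$pci/vendor     # 0x8086
  cat /sys/bus/pci/devices/$pci/device     # 0x159b (E810, hence the ice driver)
  ls /sys/bus/pci/devices/$pci/net/        # -> cvl_0_0 on this rig
  cat /sys/class/net/cvl_0_0/operstate     # the harness keeps only interfaces that are up
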
00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:54.444 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@394 -- # [[ up == up ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:54.444 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # is_hw=yes 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
cvl_0_0 up 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:29:54.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:29:54.444 00:29:54.444 --- 10.0.0.2 ping statistics --- 00:29:54.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.444 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:29:54.444 00:29:54.444 --- 10.0.0.1 ping statistics --- 00:29:54.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.444 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@426 -- # return 0 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@13 -- # nvmfappstart -m 0x3 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@725 -- # xtrace_disable 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@485 -- # nvmfpid=1335719 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@486 -- # waitforlisten 1335719 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@832 -- # '[' -z 1335719 ']' 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local max_retries=100 00:29:54.444 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
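
nvmfappstart backgrounds nvmf_tgt (pid 1335719 here) and waitforlisten blocks until the app both stays alive and answers JSON-RPC on /var/tmp/spdk.sock. A simplified stand-in for that helper (the real one in autotest_common.sh adds max_retries bookkeeping):

  pid=1335719
  until ./scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
      # bail out if the target died before its RPC server came up
      kill -0 "$pid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
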
00:29:54.445 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@841 -- # xtrace_disable 00:29:54.445 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.445 [2024-07-24 19:59:11.613544] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:54.445 [2024-07-24 19:59:11.614575] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:29:54.445 [2024-07-24 19:59:11.614637] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.445 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.445 [2024-07-24 19:59:11.683144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:54.445 [2024-07-24 19:59:11.801448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.445 [2024-07-24 19:59:11.801509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.445 [2024-07-24 19:59:11.801548] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.445 [2024-07-24 19:59:11.801562] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.445 [2024-07-24 19:59:11.801582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.445 [2024-07-24 19:59:11.803267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.445 [2024-07-24 19:59:11.803274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.703 [2024-07-24 19:59:11.902235] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:54.703 [2024-07-24 19:59:11.902336] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:54.703 [2024-07-24 19:59:11.902539] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
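
-m 0x3 pins this target to cores 0-1, which is why exactly two reactors start here, against the four (cores 3-6) started by the earlier -m 0x78 bdevio target; each reactor thread is then switched to interrupt mode so it can sleep on file descriptors instead of busy-polling. Decoding a reactor mask is plain bit arithmetic:

  mask=0x3    # 0x78 would print cores 3 4 5 6
  for ((core = 0; core < 64; core++)); do
      (( (mask >> core) & 1 )) && echo "reactor on core $core"
  done
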
00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@865 -- # return 0 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@731 -- # xtrace_disable 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # setup_bdev_aio 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@75 -- # uname -s 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@75 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@76 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:29:54.703 5000+0 records in 00:29:54.703 5000+0 records out 00:29:54.703 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0140277 s, 730 MB/s 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:54.703 19:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.703 AIO0 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.703 [2024-07-24 19:59:12.011997] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@562 -- # xtrace_disable 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:29:54.703 [2024-07-24 19:59:12.052254] tcp.c:1080:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # for i in {0..1} 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@22 -- # reactor_is_idle 1335719 0 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1335719 0 idle 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1335719 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:29:54.703 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335719 root 20 0 128.2g 46464 34176 S 0.0 0.1 0:00.33 reactor_0' 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335719 root 20 0 128.2g 46464 34176 S 0.0 0.1 0:00.33 reactor_0 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # for i in {0..1} 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@22 -- # reactor_is_idle 1335719 1 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1335719 1 idle 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1335719 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:54.961 19:59:12 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:29:54.961 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335723 root 20 0 128.2g 46464 34176 S 0.0 0.1 0:00.00 reactor_1' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335723 root 20 0 128.2g 46464 34176 S 0.0 0.1 0:00.00 reactor_1 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # perf_pid=1335886 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@33 -- # for i in {0..1} 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@34 -- # reactor_is_busy 1335719 0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1335719 0 busy 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1335719 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:55.219 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335719 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.33 reactor_0' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335719 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.33 reactor_0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # 
cpu_rate=0.0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 0 -lt 70 ]] 00:29:55.219 19:59:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@29 -- # sleep 1 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j-- )) 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335719 root 20 0 128.2g 47232 34560 R 99.9 0.1 0:01.48 reactor_0' 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335719 root 20 0 128.2g 47232 34560 R 99.9 0.1 0:01.48 reactor_0 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@33 -- # for i in {0..1} 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@34 -- # reactor_is_busy 1335719 1 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@47 -- # reactor_is_busy_or_idle 1335719 1 busy 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1335719 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ busy != \b\u\s\y ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335723 root 20 0 128.2g 47232 34560 R 99.9 0.1 0:01.43 reactor_1' 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335723 root 20 0 128.2g 47232 34560 R 99.9 0.1 0:01.43 reactor_1 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=99.9 
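The busy/idle probe traced above boils down to a one-second polling loop: one batch-mode top snapshot per iteration, with the %CPU column of the reactor thread compared against fixed thresholds (>= 70% counts as busy, <= 30% as idle, up to 10 retries). A minimal bash sketch reconstructed from the xtrace output; the helper name, retry count, thresholds, and column positions all appear in the trace, but the exact shell around them is an assumption, not the verbatim interrupt/common.sh source:

    # Probe whether reactor <idx> of SPDK target <pid> is busy or idle,
    # retrying up to 10 times at one-second intervals (as in the trace above).
    reactor_is_busy_or_idle() {
        local pid=$1 idx=$2 state=$3 row cpu_rate
        for (( j = 10; j != 0; j-- )); do
            # one batch-mode snapshot, one row per thread; keep this reactor's row
            row=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}")
            # trim leading whitespace, take column 9 (%CPU), drop the decimals
            cpu_rate=$(echo "$row" | sed -e 's/^\s*//g' | awk '{print $9}')
            cpu_rate=${cpu_rate%.*}
            [[ $state == busy ]] && (( cpu_rate >= 70 )) && return 0
            [[ $state == idle ]] && (( cpu_rate <= 30 )) && return 0
            sleep 1
        done
        return 1
    }

In the run above it is invoked as reactor_is_busy_or_idle 1335719 0 idle before the perf load starts, then with state busy once spdk_nvme_perf is running, which is why reactor_0's %CPU flips from 0.0 to 99.9 between the two probes.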
00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=99 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ busy = \b\u\s\y ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ 99 -lt 70 ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \i\d\l\e ]] 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:29:56.590 19:59:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@37 -- # wait 1335886 00:30:06.553 Initializing NVMe Controllers 00:30:06.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:06.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:06.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:06.553 Initialization complete. Launching workers. 00:30:06.553 ======================================================== 00:30:06.553 Latency(us) 00:30:06.553 Device Information : IOPS MiB/s Average min max 00:30:06.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 14379.60 56.17 4451.13 1614.52 8221.94 00:30:06.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 14045.70 54.87 4559.85 1828.96 43967.73 00:30:06.553 ======================================================== 00:30:06.553 Total : 28425.30 111.04 4504.85 1614.52 43967.73 00:30:06.553 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # for i in {0..1} 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@40 -- # reactor_is_idle 1335719 0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1335719 0 idle 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1335719 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335719 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:10.32 reactor_0' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335719 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:10.32 reactor_0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:06.553 19:59:22 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # for i in {0..1} 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@40 -- # reactor_is_idle 1335719 1 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@51 -- # reactor_is_busy_or_idle 1335719 1 idle 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1335719 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \b\u\s\y ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # [[ idle != \i\d\l\e ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@18 -- # hash top 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j = 10 )) 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@23 -- # (( j != 0 )) 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top -bHn 1 -p 1335719 -w 256 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # grep reactor_1 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@24 -- # top_reactor='1335723 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:10.13 reactor_1' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # echo 1335723 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:10.13 reactor_1 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # sed -e 's/^\s*//g' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # awk '{print $9}' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # cpu_rate=0.0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # cpu_rate=0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # [[ idle = \b\u\s\y ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \i\d\l\e ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ 0 -gt 30 ]] 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@33 -- # return 0 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@43 -- # cleanup 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@6 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@44 -- # nvmftestfini 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # nvmfcleanup 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:06.553 19:59:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:06.553 rmmod nvme_tcp 00:30:06.553 rmmod nvme_fabrics 00:30:06.553 rmmod nvme_keyring 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # '[' -n 1335719 ']' 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # killprocess 1335719 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@951 -- # '[' -z 1335719 ']' 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # kill -0 1335719 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # uname 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1335719 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1335719' 00:30:06.553 killing process with pid 1335719 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # kill 1335719 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@975 -- # wait 1335719 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@282 -- # remove_spdk_ns 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.553 19:59:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@1 -- # process_shm --id 0 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@809 -- # type=--id 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@810 -- # id=0 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@811 -- # '[' --id = --pid ']' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@815 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@815 -- # shm_files=nvmf_trace.0 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@817 -- # [[ -z nvmf_trace.0 ]] 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@821 -- # for n in $shm_files 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@822 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:08.453 nvmf_trace.0 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@824 -- # return 0 
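The teardown traced above compresses several distinct steps. Reconstructed in order from the xtrace (the pid 1335719, interface cvl_0_1, and output path are this run's values, used here purely as placeholders):

    # Teardown sketch: flush buffers, unload the kernel NVMe/TCP initiator
    # modules, stop the nvmf target, and archive its shared-memory trace file.
    sync
    set +e                           # module removal may fail if devices linger
    modprobe -v -r nvme-tcp          # cascade also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    set -e
    kill -0 1335719 && kill 1335719  # signal the target only if it is still alive
    wait 1335719 || true             # reap it (works because the target is a child job)
    ip -4 addr flush cvl_0_1         # remove the test address from the initiator NIC
    # keep the SPDK trace file for post-mortem inspection
    tar -C /dev/shm/ -cvzf "$OUTPUT_DIR/nvmf_trace.0_shm.tar.gz" nvmf_trace.0

Note the second nvmftestfini pass below reruns killprocess against the same pid, which is why the harness logs the bash "kill: ... No such process" line and falls through to "Process with pid 1335719 is not found".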
00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@1 -- # nvmftestfini 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # nvmfcleanup 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # '[' -n 1335719 ']' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@494 -- # killprocess 1335719 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@951 -- # '[' -z 1335719 ']' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@955 -- # kill -0 1335719 00:30:08.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1335719) - No such process 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # echo 'Process with pid 1335719 is not found' 00:30:08.453 Process with pid 1335719 is not found 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@282 -- # remove_spdk_ns 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:30:08.453 00:30:08.453 real 0m16.106s 00:30:08.453 user 0m36.203s 00:30:08.453 sys 0m6.459s 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # xtrace_disable 00:30:08.453 19:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:30:08.453 ************************************ 00:30:08.453 END TEST nvmf_interrupt 00:30:08.453 ************************************ 00:30:08.453 00:30:08.453 real 23m55.992s 00:30:08.453 user 55m49.762s 00:30:08.453 sys 6m28.680s 00:30:08.453 19:59:25 nvmf_tcp -- common/autotest_common.sh@1127 -- # xtrace_disable 00:30:08.453 19:59:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.453 ************************************ 00:30:08.453 END TEST nvmf_tcp 00:30:08.453 ************************************ 00:30:08.453 19:59:25 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:08.453 19:59:25 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:08.453 19:59:25 -- 
common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:30:08.453 19:59:25 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:30:08.453 19:59:25 -- common/autotest_common.sh@10 -- # set +x 00:30:08.453 ************************************ 00:30:08.453 START TEST spdkcli_nvmf_tcp 00:30:08.453 ************************************ 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:08.453 * Looking for test storage... 00:30:08.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.453 19:59:25 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1337484 00:30:08.454 
19:59:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1337484 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # '[' -z 1337484 ']' 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local max_retries=100 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@841 -- # xtrace_disable 00:30:08.454 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.454 [2024-07-24 19:59:25.701841] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:30:08.454 [2024-07-24 19:59:25.701938] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1337484 ] 00:30:08.454 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.454 [2024-07-24 19:59:25.760375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:08.712 [2024-07-24 19:59:25.868118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:08.712 [2024-07-24 19:59:25.868123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:08.712 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:30:08.712 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@865 -- # return 0 00:30:08.712 19:59:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:08.712 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:08.712 19:59:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.712 19:59:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:08.712 19:59:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:08.712 19:59:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:08.712 19:59:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:08.712 19:59:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:08.712 19:59:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:08.712 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:08.712 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:08.712 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:08.712 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:08.712 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:08.712 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:08.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:08.712 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:08.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:08.712 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:08.712 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:08.712 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:08.712 ' 00:30:11.237 [2024-07-24 19:59:28.554964] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.608 [2024-07-24 19:59:29.779240] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:15.134 [2024-07-24 19:59:32.058396] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:17.040 [2024-07-24 19:59:34.024487] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:18.432 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:18.432 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:18.432 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:18.432 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:18.432 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 
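The batched spdkcli_job.py invocation above drives spdkcli non-interactively: each quoted triple is (command, expected-output match, expect-success flag), and the "Executing command" entries that follow are its per-command results. An assumed-equivalent set of one-off calls through plain scripts/spdkcli.py, with command strings taken verbatim from the batch (the one-shot argument form and the default /var/tmp/spdk.sock RPC socket are assumptions about the local checkout):

    # A few of the batched create commands, replayed as standalone spdkcli
    # calls against the running nvmf_tgt started earlier in this test.
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW \
        max_namespaces=4 allow_any_host=True
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses \
        create tcp 127.0.0.1 4260 IPv4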
00:30:18.432 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:18.432 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:18.432 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.432 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.432 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.432 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:18.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:18.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:18.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:18.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:18.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:18.433 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:18.433 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:18.433 19:59:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:18.690 19:59:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:18.948 19:59:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:18.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:18.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:18.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:18.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:18.948 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:18.948 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:18.948 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:18.948 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:18.948 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:18.948 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:18.948 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:18.948 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:18.948 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:18.948 ' 00:30:24.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:24.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:24.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:24.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:24.207 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:24.207 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:24.207 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:24.207 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:24.207 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:24.207 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:24.207 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:24.207 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:24.207 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:24.207 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1337484 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' -z 1337484 ']' 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # kill -0 1337484 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # uname 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1337484 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1337484' 00:30:24.207 killing process with pid 1337484 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # kill 1337484 00:30:24.207 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # wait 1337484 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1337484 ']' 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1337484 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' -z 1337484 ']' 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # kill -0 1337484 00:30:24.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1337484) - No such process 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # echo 'Process with pid 1337484 is not found' 00:30:24.466 Process with pid 1337484 is not found 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:24.466 00:30:24.466 real 0m16.041s 00:30:24.466 user 0m33.923s 
00:30:24.466 sys 0m0.823s 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # xtrace_disable 00:30:24.466 19:59:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:24.466 ************************************ 00:30:24.466 END TEST spdkcli_nvmf_tcp 00:30:24.466 ************************************ 00:30:24.466 19:59:41 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:24.466 19:59:41 -- common/autotest_common.sh@1102 -- # '[' 3 -le 1 ']' 00:30:24.466 19:59:41 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:30:24.466 19:59:41 -- common/autotest_common.sh@10 -- # set +x 00:30:24.466 ************************************ 00:30:24.466 START TEST nvmf_identify_passthru 00:30:24.466 ************************************ 00:30:24.466 19:59:41 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:24.466 * Looking for test storage... 00:30:24.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:24.466 19:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.466 19:59:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.466 19:59:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.466 19:59:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # 
source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:24.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.466 19:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.466 19:59:41 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.466 19:59:41 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.466 19:59:41 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.466 
19:59:41 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:24.466 19:59:41 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.466 19:59:41 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@452 -- # prepare_net_devs 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@414 -- # local -g is_hw=no 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@416 -- # remove_spdk_ns 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.466 19:59:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:24.466 19:59:41 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:30:24.466 19:59:41 nvmf_identify_passthru -- nvmf/common.sh@289 -- # xtrace_disable 00:30:24.466 19:59:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # pci_devs=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -a pci_devs 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # pci_net_devs=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # pci_drivers=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -A pci_drivers 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@299 -- # net_devs=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@299 -- # local -ga net_devs 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@300 -- # e810=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@300 -- # local -ga e810 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@301 -- # x722=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@301 -- # local -ga x722 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # mlx=() 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@302 -- # local -ga mlx 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:26.365 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:30:26.365 19:59:43 nvmf_identify_passthru 
-- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:26.365 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # [[ up == up ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:26.365 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@394 -- # [[ up == up ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:26.365 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@418 -- # is_hw=yes 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- 
nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:30:26.365 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:30:26.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:30:26.623 00:30:26.623 --- 10.0.0.2 ping statistics --- 00:30:26.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.623 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:26.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:30:26.623 00:30:26.623 --- 10.0.0.1 ping statistics --- 00:30:26.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.623 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@426 -- # return 0 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # '[' '' == iso ']' 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:30:26.623 19:59:43 nvmf_identify_passthru -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=() 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # local bdfs 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # bdfs=($(get_nvme_bdfs)) 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1526 -- # get_nvme_bdfs 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=() 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # local bdfs 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # jq -r '.config[].params.traddr' 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # (( 1 == 0 )) 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # printf '%s\n' 0000:88:00.0 00:30:26.623 19:59:43 nvmf_identify_passthru -- common/autotest_common.sh@1528 -- # echo 0000:88:00.0 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:26.623 19:59:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:26.623 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.803 
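identify_passthru.sh first records the drive's serial and model directly over PCIe; the same spdk_nvme_identify binary is pointed at the TCP subsystem later so the two answers can be compared. Condensed from the commands traced above — note that awk '{print $3}' keeps only the first word of the model string, which is why the model is recorded as just INTEL:

    IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    bdf=0000:88:00.0

    nvme_serial_number=$("$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    nvme_model_number=$("$IDENTIFY" -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')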
19:59:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:30.803 19:59:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:30.803 19:59:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:30.803 19:59:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:30.803 EAL: No free 2048 kB hugepages reported on node 1 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1342067 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:34.984 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1342067 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # '[' -z 1342067 ']' 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local max_retries=100 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@841 -- # xtrace_disable 00:30:34.984 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:34.984 [2024-07-24 19:59:52.301878] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:30:34.984 [2024-07-24 19:59:52.301955] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.984 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.242 [2024-07-24 19:59:52.367161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:35.242 [2024-07-24 19:59:52.473801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:35.242 [2024-07-24 19:59:52.473852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:35.242 [2024-07-24 19:59:52.473882] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:35.242 [2024-07-24 19:59:52.473893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:35.242 [2024-07-24 19:59:52.473903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:35.242 [2024-07-24 19:59:52.473985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.242 [2024-07-24 19:59:52.474051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:35.242 [2024-07-24 19:59:52.474117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:35.242 [2024-07-24 19:59:52.474120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@865 -- # return 0 00:30:35.242 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.242 INFO: Log level set to 20 00:30:35.242 INFO: Requests: 00:30:35.242 { 00:30:35.242 "jsonrpc": "2.0", 00:30:35.242 "method": "nvmf_set_config", 00:30:35.242 "id": 1, 00:30:35.242 "params": { 00:30:35.242 "admin_cmd_passthru": { 00:30:35.242 "identify_ctrlr": true 00:30:35.242 } 00:30:35.242 } 00:30:35.242 } 00:30:35.242 00:30:35.242 INFO: response: 00:30:35.242 { 00:30:35.242 "jsonrpc": "2.0", 00:30:35.242 "id": 1, 00:30:35.242 "result": true 00:30:35.242 } 00:30:35.242 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:35.242 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:35.242 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.242 INFO: Setting log level to 20 00:30:35.242 INFO: Setting log level to 20 00:30:35.242 INFO: Log level set to 20 00:30:35.242 INFO: Log level set to 20 00:30:35.242 INFO: Requests: 00:30:35.242 { 00:30:35.242 "jsonrpc": "2.0", 00:30:35.242 "method": "framework_start_init", 00:30:35.242 "id": 1 00:30:35.242 } 00:30:35.242 00:30:35.242 INFO: Requests: 00:30:35.242 { 00:30:35.242 "jsonrpc": "2.0", 00:30:35.242 "method": "framework_start_init", 00:30:35.242 "id": 1 00:30:35.242 } 00:30:35.242 00:30:35.242 [2024-07-24 19:59:52.615628] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:35.500 INFO: response: 00:30:35.500 { 00:30:35.500 "jsonrpc": "2.0", 00:30:35.500 "id": 1, 00:30:35.500 "result": true 00:30:35.500 } 00:30:35.500 00:30:35.500 INFO: response: 00:30:35.500 { 00:30:35.500 "jsonrpc": "2.0", 00:30:35.500 "id": 1, 00:30:35.500 "result": true 00:30:35.500 } 00:30:35.500 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:35.500 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:35.500 19:59:52 nvmf_identify_passthru -- 
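Because the target was launched with --wait-for-rpc, nothing initializes until configuration arrives over /var/tmp/spdk.sock; rpc_cmd in the trace is the suite's wrapper around scripts/rpc.py. The two calls logged above, issued directly (paths as in this workspace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_set_config --passthru-identify-ctrlr   # the nvmf_set_config request shown above
    $RPC framework_start_init                        # releases the app from --wait-for-rpc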
common/autotest_common.sh@10 -- # set +x 00:30:35.500 INFO: Setting log level to 40 00:30:35.500 INFO: Setting log level to 40 00:30:35.500 INFO: Setting log level to 40 00:30:35.500 [2024-07-24 19:59:52.625764] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:35.500 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:35.500 19:59:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:35.500 19:59:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 Nvme0n1 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 [2024-07-24 19:59:55.521891] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 [ 00:30:38.779 { 00:30:38.779 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:38.779 "subtype": "Discovery", 00:30:38.779 "listen_addresses": [], 00:30:38.779 "allow_any_host": true, 00:30:38.779 "hosts": [] 00:30:38.779 }, 00:30:38.779 { 00:30:38.779 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:38.779 "subtype": "NVMe", 00:30:38.779 "listen_addresses": [ 00:30:38.779 { 00:30:38.779 "trtype": "TCP", 00:30:38.779 "adrfam": "IPv4", 00:30:38.779 "traddr": "10.0.0.2", 00:30:38.779 "trsvcid": "4420" 00:30:38.779 } 00:30:38.779 ], 00:30:38.779 "allow_any_host": true, 00:30:38.779 "hosts": [], 00:30:38.779 "serial_number": 
"SPDK00000000000001", 00:30:38.779 "model_number": "SPDK bdev Controller", 00:30:38.779 "max_namespaces": 1, 00:30:38.779 "min_cntlid": 1, 00:30:38.779 "max_cntlid": 65519, 00:30:38.779 "namespaces": [ 00:30:38.779 { 00:30:38.779 "nsid": 1, 00:30:38.779 "bdev_name": "Nvme0n1", 00:30:38.779 "name": "Nvme0n1", 00:30:38.779 "nguid": "DB1BB05B8E674FCBAC9359EE5FDC1626", 00:30:38.779 "uuid": "db1bb05b-8e67-4fcb-ac93-59ee5fdc1626" 00:30:38.779 } 00:30:38.779 ] 00:30:38.779 } 00:30:38.779 ] 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:38.779 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:38.779 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:38.779 19:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@492 -- # nvmfcleanup 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:38.779 rmmod nvme_tcp 00:30:38.779 rmmod nvme_fabrics 00:30:38.779 rmmod nvme_keyring 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:30:38.779 19:59:55 
nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@493 -- # '[' -n 1342067 ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- nvmf/common.sh@494 -- # killprocess 1342067 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' -z 1342067 ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # kill -0 1342067 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # uname 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1342067 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1342067' 00:30:38.779 killing process with pid 1342067 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # kill 1342067 00:30:38.779 19:59:55 nvmf_identify_passthru -- common/autotest_common.sh@975 -- # wait 1342067 00:30:40.712 19:59:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' '' == iso ']' 00:30:40.712 19:59:57 nvmf_identify_passthru -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:30:40.712 19:59:57 nvmf_identify_passthru -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:30:40.712 19:59:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:40.712 19:59:57 nvmf_identify_passthru -- nvmf/common.sh@282 -- # remove_spdk_ns 00:30:40.712 19:59:57 nvmf_identify_passthru -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.712 19:59:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:40.712 19:59:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.615 19:59:59 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:30:42.615 00:30:42.615 real 0m17.972s 00:30:42.615 user 0m26.751s 00:30:42.615 sys 0m2.280s 00:30:42.615 19:59:59 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # xtrace_disable 00:30:42.615 19:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:42.615 ************************************ 00:30:42.615 END TEST nvmf_identify_passthru 00:30:42.615 ************************************ 00:30:42.615 19:59:59 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:42.615 19:59:59 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:30:42.615 19:59:59 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:30:42.615 19:59:59 -- common/autotest_common.sh@10 -- # set +x 00:30:42.615 ************************************ 00:30:42.615 START TEST nvmf_dif 00:30:42.615 ************************************ 00:30:42.615 19:59:59 nvmf_dif -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:42.615 * Looking for test storage... 
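Teardown mirrors setup: kill the target by PID, unload the kernel initiator modules (the rmmod lines above), and flush the namespace plumbing before the next suite, nvmf_dif, rebuilds it. Roughly, per the trace — the body of _remove_spdk_ns is not shown here, so the netns delete is an assumption:

    kill "$nvmfpid"                                   # killprocess 1342067
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 1; done
    modprobe -v -r nvme-tcp nvme-fabrics              # matches the rmmod output above
    ip netns delete cvl_0_0_ns_spdk                   # assumed content of _remove_spdk_ns
    ip -4 addr flush cvl_0_1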
00:30:42.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.615 19:59:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.615 19:59:59 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.615 19:59:59 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.615 19:59:59 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.615 19:59:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.615 19:59:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.615 19:59:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.615 19:59:59 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:42.615 19:59:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.615 19:59:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.615 19:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:42.616 19:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:42.616 19:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:42.616 19:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:42.616 19:59:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@452 -- # prepare_net_devs 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@414 -- # local -g is_hw=no 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@416 -- # remove_spdk_ns 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.616 19:59:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:42.616 19:59:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:30:42.616 19:59:59 nvmf_dif -- nvmf/common.sh@289 -- # xtrace_disable 00:30:42.616 19:59:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@295 -- # pci_devs=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@295 -- # local -a pci_devs 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@296 -- # pci_net_devs=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@297 -- # pci_drivers=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@297 -- # local -A pci_drivers 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@299 -- # net_devs=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@299 -- # local -ga net_devs 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@300 -- # e810=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@300 -- # local 
-ga e810 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@301 -- # x722=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@301 -- # local -ga x722 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@302 -- # mlx=() 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@302 -- # local -ga mlx 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:44.515 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:44.515 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@387 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@394 -- # [[ up == up ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:44.515 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@394 -- # [[ up == up ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:44.515 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@418 -- # is_hw=yes 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:44.515 20:00:01 
nvmf_dif -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:30:44.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:44.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:30:44.515 00:30:44.515 --- 10.0.0.2 ping statistics --- 00:30:44.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.515 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:44.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:44.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:30:44.515 00:30:44.515 --- 10.0.0.1 ping statistics --- 00:30:44.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:44.515 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@426 -- # return 0 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@454 -- # '[' iso == iso ']' 00:30:44.515 20:00:01 nvmf_dif -- nvmf/common.sh@455 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:45.443 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:45.443 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:45.443 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:45.443 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:45.443 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:45.443 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:45.443 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:45.443 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:45.443 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:45.443 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:45.443 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:45.443 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:45.701 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:45.701 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:45.701 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:45.701 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:45.701 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:30:45.701 20:00:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:45.701 20:00:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@725 -- # xtrace_disable 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@485 -- # nvmfpid=1345353 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:45.701 20:00:03 nvmf_dif -- nvmf/common.sh@486 -- # waitforlisten 1345353 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@832 -- # '[' -z 1345353 ']' 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@837 -- # local max_retries=100 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:45.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@841 -- # xtrace_disable 00:30:45.701 20:00:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.701 [2024-07-24 20:00:03.074723] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:30:45.701 [2024-07-24 20:00:03.074794] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.959 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.959 [2024-07-24 20:00:03.140783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.959 [2024-07-24 20:00:03.257368] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.959 [2024-07-24 20:00:03.257431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.959 [2024-07-24 20:00:03.257459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.959 [2024-07-24 20:00:03.257473] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.959 [2024-07-24 20:00:03.257485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
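nvmfappstart for the dif suite launches the target inside the namespace (without --wait-for-rpc this time, and on the default single core: "Total cores available: 1") and blocks in waitforlisten until the RPC socket answers. A simplified sketch of that start/wait pattern:

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec cvl_0_0_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF &
    nvmfpid=$!

    # waitforlisten, simplified: poll until the app owns /var/tmp/spdk.sock.
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1      # bail out if the target died
        sleep 0.1
    done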
00:30:45.959 [2024-07-24 20:00:03.257514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.216 20:00:03 nvmf_dif -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:30:46.216 20:00:03 nvmf_dif -- common/autotest_common.sh@865 -- # return 0 00:30:46.216 20:00:03 nvmf_dif -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:30:46.216 20:00:03 nvmf_dif -- common/autotest_common.sh@731 -- # xtrace_disable 00:30:46.216 20:00:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.216 20:00:03 nvmf_dif -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:46.217 20:00:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:46.217 20:00:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:46.217 20:00:03 nvmf_dif -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:46.217 20:00:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.217 [2024-07-24 20:00:03.400440] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.217 20:00:03 nvmf_dif -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:46.217 20:00:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:46.217 20:00:03 nvmf_dif -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:30:46.217 20:00:03 nvmf_dif -- common/autotest_common.sh@1108 -- # xtrace_disable 00:30:46.217 20:00:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:46.217 ************************************ 00:30:46.217 START TEST fio_dif_1_default 00:30:46.217 ************************************ 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # fio_dif_1 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.217 bdev_null0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:46.217 [2024-07-24 20:00:03.456754] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@536 -- # config=() 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@536 -- # local subsystem config 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:30:46.217 { 00:30:46.217 "params": { 00:30:46.217 "name": "Nvme$subsystem", 00:30:46.217 "trtype": "$TEST_TRANSPORT", 00:30:46.217 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:46.217 "adrfam": "ipv4", 00:30:46.217 "trsvcid": "$NVMF_PORT", 00:30:46.217 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:46.217 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:46.217 "hdgst": ${hdgst:-false}, 00:30:46.217 "ddgst": ${ddgst:-false} 00:30:46.217 }, 00:30:46.217 "method": "bdev_nvme_attach_controller" 00:30:46.217 } 00:30:46.217 EOF 00:30:46.217 )") 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local sanitizers 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # shift 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local asan_lib= 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # cat 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default 
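Each dif case stands up the same scaffolding: a transport created with --dif-insert-or-strip, a null bdev carrying 16 bytes of metadata with DIF type 1, and a subsystem listening on the namespace IP. The rpc_cmd invocations traced above, written out against scripts/rpc.py:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o --dif-insert-or-strip
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420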
-- target/dif.sh@72 -- # (( file <= files )) 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # grep libasan 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # jq . 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@561 -- # IFS=, 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:30:46.217 "params": { 00:30:46.217 "name": "Nvme0", 00:30:46.217 "trtype": "tcp", 00:30:46.217 "traddr": "10.0.0.2", 00:30:46.217 "adrfam": "ipv4", 00:30:46.217 "trsvcid": "4420", 00:30:46.217 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:46.217 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:46.217 "hdgst": false, 00:30:46.217 "ddgst": false 00:30:46.217 }, 00:30:46.217 "method": "bdev_nvme_attach_controller" 00:30:46.217 }' 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # asan_lib= 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # asan_lib= 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:46.217 20:00:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.481 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:46.481 fio-3.35 00:30:46.481 Starting 1 thread 00:30:46.481 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.674 00:30:58.674 filename0: (groupid=0, jobs=1): err= 0: pid=1345580: Wed Jul 24 20:00:14 2024 00:30:58.674 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:30:58.674 slat (nsec): min=4328, max=51853, avg=8691.52, stdev=2908.84 00:30:58.674 clat (usec): min=40888, max=45024, avg=40989.58, stdev=259.06 00:30:58.674 lat (usec): min=40896, max=45038, avg=40998.28, stdev=258.99 00:30:58.674 clat percentiles (usec): 00:30:58.674 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:58.674 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:30:58.674 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:30:58.674 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:30:58.674 | 99.99th=[44827] 00:30:58.674 bw ( KiB/s): min= 384, max= 416, per=99.47%, avg=388.80, stdev=11.72, samples=20 00:30:58.674 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:30:58.674 
lat (msec) : 50=100.00% 00:30:58.674 cpu : usr=89.48%, sys=10.24%, ctx=14, majf=0, minf=224 00:30:58.674 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.674 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.674 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.674 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.674 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:58.674 00:30:58.674 Run status group 0 (all jobs): 00:30:58.674 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 00:30:58.674 real 0m11.349s 00:30:58.674 user 0m10.279s 00:30:58.674 sys 0m1.315s 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 ************************************ 00:30:58.674 END TEST fio_dif_1_default 00:30:58.674 ************************************ 00:30:58.674 20:00:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:58.674 20:00:14 nvmf_dif -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:30:58.674 20:00:14 nvmf_dif -- common/autotest_common.sh@1108 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 ************************************ 00:30:58.674 START TEST fio_dif_1_multi_subsystems 00:30:58.674 ************************************ 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # fio_dif_1_multi_subsystems 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:58.674 20:00:14 
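The fio numbers above are internally consistent: with iodepth=4 and completion latency steady near 41 ms in this run, expected throughput is roughly depth / latency = 4 / 0.04099 s, about 97.6 IOPS, and 97.6 x 4 KiB is about 390 KiB/s — matching the reported bw avg of 388.80 KiB/s and 97.2 iops. A one-liner to reproduce the arithmetic:

    awk 'BEGIN { iops = 4 / 0.040990; printf "%.1f IOPS -> %.0f KiB/s\n", iops, iops * 4 }'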
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 bdev_null0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 [2024-07-24 20:00:14.852944] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 bdev_null1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@536 -- # config=() 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@536 -- # local subsystem config 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:30:58.674 { 00:30:58.674 "params": { 00:30:58.674 "name": "Nvme$subsystem", 00:30:58.674 "trtype": "$TEST_TRANSPORT", 00:30:58.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.674 "adrfam": "ipv4", 00:30:58.674 "trsvcid": "$NVMF_PORT", 00:30:58.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.674 "hdgst": ${hdgst:-false}, 00:30:58.674 "ddgst": ${ddgst:-false} 00:30:58.674 }, 00:30:58.674 "method": "bdev_nvme_attach_controller" 00:30:58.674 } 00:30:58.674 EOF 00:30:58.674 )") 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local sanitizers 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # shift 00:30:58.674 20:00:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local asan_lib= 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # cat 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # grep libasan 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:30:58.674 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:30:58.674 { 00:30:58.674 "params": { 00:30:58.674 "name": "Nvme$subsystem", 00:30:58.675 "trtype": "$TEST_TRANSPORT", 00:30:58.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.675 "adrfam": "ipv4", 00:30:58.675 "trsvcid": "$NVMF_PORT", 00:30:58.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.675 "hdgst": ${hdgst:-false}, 00:30:58.675 "ddgst": ${ddgst:-false} 00:30:58.675 }, 00:30:58.675 "method": "bdev_nvme_attach_controller" 00:30:58.675 } 00:30:58.675 EOF 00:30:58.675 )") 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # cat 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # jq . 
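The heredoc and config+=() steps traced above are the heart of gen_nvmf_target_json: one bdev_nvme_attach_controller stanza is rendered per subsystem id, and the stanzas are comma-joined through IFS before being printed (the jq . / IFS=, / printf entries around this point in the trace belong to that same sequence). A minimal standalone sketch of the pattern, with the literal values this job uses filled in; it illustrates the mechanism, not the verbatim nvmf/common.sh function:

gen_sub_conf() {
	local subsystem config=()
	for subsystem in "${@:-1}"; do
		# render one attach-controller stanza per subsystem id
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	local IFS=,
	printf '%s\n' "${config[*]}"   # array joined with "," => {...},{...}
}

gen_sub_conf 0 1   # reproduces the two-stanza cnode0/cnode1 config printed just below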
00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@561 -- # IFS=, 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:30:58.675 "params": { 00:30:58.675 "name": "Nvme0", 00:30:58.675 "trtype": "tcp", 00:30:58.675 "traddr": "10.0.0.2", 00:30:58.675 "adrfam": "ipv4", 00:30:58.675 "trsvcid": "4420", 00:30:58.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.675 "hdgst": false, 00:30:58.675 "ddgst": false 00:30:58.675 }, 00:30:58.675 "method": "bdev_nvme_attach_controller" 00:30:58.675 },{ 00:30:58.675 "params": { 00:30:58.675 "name": "Nvme1", 00:30:58.675 "trtype": "tcp", 00:30:58.675 "traddr": "10.0.0.2", 00:30:58.675 "adrfam": "ipv4", 00:30:58.675 "trsvcid": "4420", 00:30:58.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.675 "hdgst": false, 00:30:58.675 "ddgst": false 00:30:58.675 }, 00:30:58.675 "method": "bdev_nvme_attach_controller" 00:30:58.675 }' 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # asan_lib= 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # asan_lib= 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:58.675 20:00:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.675 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.675 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.675 fio-3.35 00:30:58.675 Starting 2 threads 00:30:58.675 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.636 00:31:08.636 filename0: (groupid=0, jobs=1): err= 0: pid=1347491: Wed Jul 24 20:00:25 2024 00:31:08.636 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10039msec) 00:31:08.636 slat (nsec): min=4996, max=45849, avg=10621.66, stdev=3262.81 00:31:08.636 clat (usec): min=40894, max=44436, avg=41794.57, stdev=425.20 00:31:08.636 lat (usec): min=40908, max=44452, avg=41805.19, stdev=425.24 00:31:08.636 clat percentiles (usec): 00:31:08.636 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:31:08.636 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:08.636 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:08.636 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:08.636 | 99.99th=[44303] 
00:31:08.636 bw ( KiB/s): min= 352, max= 384, per=49.63%, avg=382.40, stdev= 7.16, samples=20 00:31:08.636 iops : min= 88, max= 96, avg=95.60, stdev= 1.79, samples=20 00:31:08.636 lat (msec) : 50=100.00% 00:31:08.636 cpu : usr=94.01%, sys=5.51%, ctx=25, majf=0, minf=189 00:31:08.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.636 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:08.636 filename1: (groupid=0, jobs=1): err= 0: pid=1347492: Wed Jul 24 20:00:25 2024 00:31:08.636 read: IOPS=96, BW=387KiB/s (397kB/s)(3888KiB/10040msec) 00:31:08.636 slat (nsec): min=5240, max=29937, avg=10487.38, stdev=3068.49 00:31:08.636 clat (usec): min=40871, max=44501, avg=41282.62, stdev=500.94 00:31:08.636 lat (usec): min=40879, max=44516, avg=41293.11, stdev=502.03 00:31:08.636 clat percentiles (usec): 00:31:08.636 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:08.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:08.636 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:31:08.636 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:31:08.636 | 99.99th=[44303] 00:31:08.636 bw ( KiB/s): min= 351, max= 416, per=50.28%, avg=387.15, stdev=14.44, samples=20 00:31:08.636 iops : min= 87, max= 104, avg=96.75, stdev= 3.71, samples=20 00:31:08.636 lat (msec) : 50=100.00% 00:31:08.636 cpu : usr=94.47%, sys=5.26%, ctx=10, majf=0, minf=98 00:31:08.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.636 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:08.636 00:31:08.636 Run status group 0 (all jobs): 00:31:08.636 READ: bw=770KiB/s (788kB/s), 383KiB/s-387KiB/s (392kB/s-397kB/s), io=7728KiB (7913kB), run=10039-10040msec 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.894 20:00:26 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:08.894 00:31:08.894 real 0m11.379s 00:31:08.894 user 0m20.239s 00:31:08.894 sys 0m1.365s 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # xtrace_disable 00:31:08.894 20:00:26 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:08.894 ************************************ 00:31:08.894 END TEST fio_dif_1_multi_subsystems 00:31:08.894 ************************************ 00:31:08.894 20:00:26 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:08.894 20:00:26 nvmf_dif -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:31:08.894 20:00:26 nvmf_dif -- common/autotest_common.sh@1108 -- # xtrace_disable 00:31:08.894 20:00:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.894 ************************************ 00:31:08.894 START TEST fio_dif_rand_params 00:31:08.894 ************************************ 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # fio_dif_rand_params 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:08.894 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.895 bdev_null0 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:08.895 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:09.159 [2024-07-24 20:00:26.282408] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # config=() 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # local subsystem config 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:09.159 { 00:31:09.159 "params": { 00:31:09.159 "name": "Nvme$subsystem", 00:31:09.159 "trtype": "$TEST_TRANSPORT", 00:31:09.159 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:09.159 "adrfam": "ipv4", 00:31:09.159 "trsvcid": "$NVMF_PORT", 00:31:09.159 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:09.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:09.159 "hdgst": ${hdgst:-false}, 00:31:09.159 "ddgst": ${ddgst:-false} 00:31:09.159 }, 00:31:09.159 "method": "bdev_nvme_attach_controller" 00:31:09.159 } 00:31:09.159 EOF 00:31:09.159 )") 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1357 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local sanitizers 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # shift 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local asan_lib= 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # grep libasan 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # jq . 
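The recurring ldd | grep | awk triplets in this trace are the sanitizer probe in autotest_common.sh's fio_bdev wrapper: if the fio plugin links an ASan runtime, that runtime has to sit ahead of the plugin in LD_PRELOAD or ASan refuses to initialize under an uninstrumented fio binary. A condensed sketch of the probe as this log exercises it (on this non-sanitizer build both greps match nothing, so LD_PRELOAD ends up holding only the plugin; the real loop is structured slightly differently):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
	# column 3 of ldd output is the resolved library path; empty if not linked
	asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
	[[ -n $asan_lib ]] && break
done
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
	--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
# /dev/fd/62 and /dev/fd/61 are the JSON target config and the generated fio
# job file, handed in by the caller via process substitution as in the trace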
00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@561 -- # IFS=, 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:31:09.159 "params": { 00:31:09.159 "name": "Nvme0", 00:31:09.159 "trtype": "tcp", 00:31:09.159 "traddr": "10.0.0.2", 00:31:09.159 "adrfam": "ipv4", 00:31:09.159 "trsvcid": "4420", 00:31:09.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:09.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:09.159 "hdgst": false, 00:31:09.159 "ddgst": false 00:31:09.159 }, 00:31:09.159 "method": "bdev_nvme_attach_controller" 00:31:09.159 }' 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # asan_lib= 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # asan_lib= 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.159 20:00:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.417 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:09.417 ... 
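The banner above (rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3, with "..." standing in for the other two jobs) reflects the parameters set at target/dif.sh@103 earlier in this trace: NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5. The job file itself is built on the fly by gen_fio_conf and passed as /dev/fd/61; the following is a plausible minimal reconstruction under those parameters, not the verbatim generated file, and the filename assumes the Nvme0n1 namespace bdev exposed by bdev_nvme_attach_controller:

cat <<'FIO' > /tmp/dif.fio
[global]
ioengine=spdk_bdev
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
FIO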
00:31:09.417 fio-3.35 00:31:09.417 Starting 3 threads 00:31:09.417 EAL: No free 2048 kB hugepages reported on node 1 00:31:15.998 00:31:15.998 filename0: (groupid=0, jobs=1): err= 0: pid=1348887: Wed Jul 24 20:00:32 2024 00:31:15.998 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5045msec) 00:31:15.998 slat (nsec): min=5859, max=32558, avg=14360.02, stdev=2547.35 00:31:15.998 clat (usec): min=4548, max=87981, avg=14154.21, stdev=12534.80 00:31:15.998 lat (usec): min=4561, max=87995, avg=14168.57, stdev=12534.59 00:31:15.998 clat percentiles (usec): 00:31:15.998 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 7177], 20.00th=[ 8291], 00:31:15.998 | 30.00th=[ 8717], 40.00th=[ 9634], 50.00th=[11076], 60.00th=[11863], 00:31:15.998 | 70.00th=[12518], 80.00th=[13042], 90.00th=[15008], 95.00th=[50070], 00:31:15.998 | 99.00th=[53740], 99.50th=[54789], 99.90th=[87557], 99.95th=[87557], 00:31:15.998 | 99.99th=[87557] 00:31:15.998 bw ( KiB/s): min=16640, max=43094, per=32.31%, avg=27195.80, stdev=8084.28, samples=10 00:31:15.998 iops : min= 130, max= 336, avg=212.40, stdev=63.01, samples=10 00:31:15.998 lat (msec) : 10=42.72%, 20=47.70%, 50=4.79%, 100=4.79% 00:31:15.998 cpu : usr=91.67%, sys=7.34%, ctx=73, majf=0, minf=77 00:31:15.998 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.998 issued rwts: total=1065,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.998 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:15.998 filename0: (groupid=0, jobs=1): err= 0: pid=1348888: Wed Jul 24 20:00:32 2024 00:31:15.998 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(151MiB/5004msec) 00:31:15.998 slat (nsec): min=4978, max=64648, avg=12990.72, stdev=2062.76 00:31:15.998 clat (usec): min=4486, max=89118, avg=12387.79, stdev=10561.29 00:31:15.998 lat (usec): min=4498, max=89131, avg=12400.78, stdev=10561.29 00:31:15.998 clat percentiles (usec): 00:31:15.998 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5604], 20.00th=[ 7701], 00:31:15.998 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[ 9634], 60.00th=[11207], 00:31:15.998 | 70.00th=[12125], 80.00th=[12911], 90.00th=[14746], 95.00th=[48497], 00:31:15.998 | 99.00th=[53740], 99.50th=[54789], 99.90th=[87557], 99.95th=[88605], 00:31:15.998 | 99.99th=[88605] 00:31:15.998 bw ( KiB/s): min=24832, max=44800, per=36.75%, avg=30924.80, stdev=7143.37, samples=10 00:31:15.998 iops : min= 194, max= 350, avg=241.60, stdev=55.81, samples=10 00:31:15.998 lat (msec) : 10=52.64%, 20=41.40%, 50=2.48%, 100=3.47% 00:31:15.998 cpu : usr=92.56%, sys=6.94%, ctx=43, majf=0, minf=138 00:31:15.998 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.998 issued rwts: total=1210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.998 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:15.998 filename0: (groupid=0, jobs=1): err= 0: pid=1348889: Wed Jul 24 20:00:32 2024 00:31:15.998 read: IOPS=208, BW=26.0MiB/s (27.3MB/s)(130MiB/5006msec) 00:31:15.998 slat (nsec): min=5225, max=42121, avg=13129.99, stdev=1815.06 00:31:15.998 clat (usec): min=4597, max=54089, avg=14392.12, stdev=12909.07 00:31:15.998 lat (usec): min=4610, max=54103, avg=14405.25, stdev=12908.95 00:31:15.998 clat percentiles (usec): 
00:31:15.998 | 1.00th=[ 5145], 5.00th=[ 5604], 10.00th=[ 6915], 20.00th=[ 8029], 00:31:15.998 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11469], 00:31:15.998 | 70.00th=[11994], 80.00th=[12518], 90.00th=[46924], 95.00th=[50594], 00:31:15.998 | 99.00th=[53216], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264], 00:31:15.998 | 99.99th=[54264] 00:31:15.998 bw ( KiB/s): min=18432, max=34048, per=31.60%, avg=26598.40, stdev=6127.92, samples=10 00:31:15.998 iops : min= 144, max= 266, avg=207.80, stdev=47.87, samples=10 00:31:15.998 lat (msec) : 10=41.27%, 20=47.50%, 50=5.28%, 100=5.95% 00:31:15.998 cpu : usr=92.69%, sys=6.75%, ctx=10, majf=0, minf=62 00:31:15.998 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:15.998 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.998 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:15.998 issued rwts: total=1042,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:15.998 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:15.998 00:31:15.998 Run status group 0 (all jobs): 00:31:15.998 READ: bw=82.2MiB/s (86.2MB/s), 26.0MiB/s-30.2MiB/s (27.3MB/s-31.7MB/s), io=415MiB (435MB), run=5004-5045msec 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.998 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
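Each create_subsystem call traced here expands to the four RPCs that follow in the log: a DIF-enabled null bdev, the subsystem, its namespace, and its TCP listener. Issued by hand against a running target they would look like the sketch below; the arguments are copied from the trace, and only the scripts/rpc.py path (relative to the SPDK checkout, default RPC socket) is assumed:

id=0   # this test repeats the sequence for sub_id 0, 1 and 2
./scripts/rpc.py bdev_null_create "bdev_null$id" 64 512 --md-size 16 --dif-type 2
./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$id" \
	--serial-number "53313233-$id" --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$id" "bdev_null$id"
./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$id" \
	-t tcp -a 10.0.0.2 -s 4420

Teardown is the mirror image, as the destroy_subsystems traces earlier in this log show: nvmf_delete_subsystem followed by bdev_null_delete.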
00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 bdev_null0 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 [2024-07-24 20:00:32.442233] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 bdev_null1 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 bdev_null2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:15.999 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # config=() 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # local subsystem config 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat 
<<-EOF 00:31:16.000 { 00:31:16.000 "params": { 00:31:16.000 "name": "Nvme$subsystem", 00:31:16.000 "trtype": "$TEST_TRANSPORT", 00:31:16.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.000 "adrfam": "ipv4", 00:31:16.000 "trsvcid": "$NVMF_PORT", 00:31:16.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.000 "hdgst": ${hdgst:-false}, 00:31:16.000 "ddgst": ${ddgst:-false} 00:31:16.000 }, 00:31:16.000 "method": "bdev_nvme_attach_controller" 00:31:16.000 } 00:31:16.000 EOF 00:31:16.000 )") 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local sanitizers 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # shift 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local asan_lib= 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # grep libasan 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:16.000 { 00:31:16.000 "params": { 00:31:16.000 "name": "Nvme$subsystem", 00:31:16.000 "trtype": "$TEST_TRANSPORT", 00:31:16.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.000 "adrfam": "ipv4", 00:31:16.000 "trsvcid": "$NVMF_PORT", 00:31:16.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.000 "hdgst": ${hdgst:-false}, 00:31:16.000 "ddgst": ${ddgst:-false} 00:31:16.000 }, 00:31:16.000 "method": "bdev_nvme_attach_controller" 00:31:16.000 } 00:31:16.000 EOF 00:31:16.000 )") 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:16.000 { 00:31:16.000 "params": { 00:31:16.000 "name": "Nvme$subsystem", 00:31:16.000 "trtype": "$TEST_TRANSPORT", 00:31:16.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.000 "adrfam": "ipv4", 00:31:16.000 "trsvcid": "$NVMF_PORT", 00:31:16.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.000 "hdgst": ${hdgst:-false}, 00:31:16.000 "ddgst": ${ddgst:-false} 00:31:16.000 }, 00:31:16.000 "method": "bdev_nvme_attach_controller" 00:31:16.000 } 00:31:16.000 EOF 00:31:16.000 )") 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # jq . 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@561 -- # IFS=, 00:31:16.000 20:00:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:31:16.000 "params": { 00:31:16.000 "name": "Nvme0", 00:31:16.001 "trtype": "tcp", 00:31:16.001 "traddr": "10.0.0.2", 00:31:16.001 "adrfam": "ipv4", 00:31:16.001 "trsvcid": "4420", 00:31:16.001 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.001 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.001 "hdgst": false, 00:31:16.001 "ddgst": false 00:31:16.001 }, 00:31:16.001 "method": "bdev_nvme_attach_controller" 00:31:16.001 },{ 00:31:16.001 "params": { 00:31:16.001 "name": "Nvme1", 00:31:16.001 "trtype": "tcp", 00:31:16.001 "traddr": "10.0.0.2", 00:31:16.001 "adrfam": "ipv4", 00:31:16.001 "trsvcid": "4420", 00:31:16.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.001 "hdgst": false, 00:31:16.001 "ddgst": false 00:31:16.001 }, 00:31:16.001 "method": "bdev_nvme_attach_controller" 00:31:16.001 },{ 00:31:16.001 "params": { 00:31:16.001 "name": "Nvme2", 00:31:16.001 "trtype": "tcp", 00:31:16.001 "traddr": "10.0.0.2", 00:31:16.001 "adrfam": "ipv4", 00:31:16.001 "trsvcid": "4420", 00:31:16.001 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:16.001 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:16.001 "hdgst": false, 00:31:16.001 "ddgst": false 00:31:16.001 }, 00:31:16.001 "method": "bdev_nvme_attach_controller" 00:31:16.001 }' 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # asan_lib= 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1346 -- # asan_lib= 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.001 20:00:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.001 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:16.001 ... 00:31:16.001 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:16.001 ... 00:31:16.001 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:16.001 ... 00:31:16.001 fio-3.35 00:31:16.001 Starting 24 threads 00:31:16.001 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.233 00:31:28.233 filename0: (groupid=0, jobs=1): err= 0: pid=1349745: Wed Jul 24 20:00:43 2024 00:31:28.233 read: IOPS=316, BW=1266KiB/s (1296kB/s)(12.4MiB/10013msec) 00:31:28.233 slat (nsec): min=5985, max=98878, avg=21822.74, stdev=18510.46 00:31:28.233 clat (msec): min=8, max=321, avg=50.38, stdev=59.38 00:31:28.233 lat (msec): min=9, max=321, avg=50.40, stdev=59.38 00:31:28.233 clat percentiles (msec): 00:31:28.233 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:31:28.233 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.233 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.233 | 99.00th=[ 268], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:31:28.233 | 99.99th=[ 321] 00:31:28.233 bw ( KiB/s): min= 256, max= 2052, per=4.23%, avg=1261.00, stdev=792.83, samples=20 00:31:28.233 iops : min= 64, max= 513, avg=315.25, stdev=198.21, samples=20 00:31:28.233 lat (msec) : 10=0.95%, 20=0.57%, 50=90.40%, 250=4.61%, 500=3.47% 00:31:28.233 cpu : usr=97.56%, sys=1.87%, ctx=95, majf=0, minf=25 00:31:28.233 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:28.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.233 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349746: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=314, BW=1256KiB/s (1287kB/s)(12.3MiB/10022msec) 00:31:28.234 slat (nsec): min=8070, max=62473, avg=26518.60, stdev=9883.98 00:31:28.234 clat (msec): min=19, max=404, avg=50.69, stdev=62.89 00:31:28.234 lat (msec): min=19, max=404, avg=50.72, stdev=62.89 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 22], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 243], 00:31:28.234 | 99.00th=[ 326], 99.50th=[ 359], 99.90th=[ 405], 99.95th=[ 405], 00:31:28.234 | 99.99th=[ 405] 00:31:28.234 bw ( KiB/s): min= 128, max= 1920, per=4.20%, avg=1252.80, stdev=796.22, samples=20 00:31:28.234 iops : min= 32, max= 480, avg=313.20, stdev=199.06, samples=20 00:31:28.234 lat (msec) : 20=0.25%, 50=91.42%, 100=0.83%, 250=3.88%, 500=3.62% 00:31:28.234 cpu : usr=97.98%, sys=1.43%, 
ctx=71, majf=0, minf=34 00:31:28.234 IO depths : 1=5.5%, 2=11.4%, 4=23.9%, 8=52.2%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349747: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=303, BW=1215KiB/s (1244kB/s)(11.9MiB/10008msec) 00:31:28.234 slat (usec): min=6, max=118, avg=36.52, stdev=13.58 00:31:28.234 clat (msec): min=29, max=504, avg=52.32, stdev=77.31 00:31:28.234 lat (msec): min=29, max=504, avg=52.35, stdev=77.30 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 321], 00:31:28.234 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 502], 99.95th=[ 506], 00:31:28.234 | 99.99th=[ 506] 00:31:28.234 bw ( KiB/s): min= 128, max= 1920, per=3.93%, avg=1172.21, stdev=817.55, samples=19 00:31:28.234 iops : min= 32, max= 480, avg=293.05, stdev=204.39, samples=19 00:31:28.234 lat (msec) : 50=93.68%, 100=0.53%, 250=0.07%, 500=5.59%, 750=0.13% 00:31:28.234 cpu : usr=96.77%, sys=2.10%, ctx=138, majf=0, minf=23 00:31:28.234 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349748: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=306, BW=1228KiB/s (1257kB/s)(12.0MiB/10009msec) 00:31:28.234 slat (usec): min=8, max=143, avg=36.67, stdev=14.14 00:31:28.234 clat (msec): min=29, max=499, avg=51.78, stdev=71.05 00:31:28.234 lat (msec): min=29, max=499, avg=51.82, stdev=71.04 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 245], 00:31:28.234 | 99.00th=[ 376], 99.50th=[ 384], 99.90th=[ 498], 99.95th=[ 502], 00:31:28.234 | 99.99th=[ 502] 00:31:28.234 bw ( KiB/s): min= 128, max= 1920, per=4.09%, avg=1222.55, stdev=802.44, samples=20 00:31:28.234 iops : min= 32, max= 480, avg=305.60, stdev=200.58, samples=20 00:31:28.234 lat (msec) : 50=92.71%, 100=0.59%, 250=2.08%, 500=4.62% 00:31:28.234 cpu : usr=98.26%, sys=1.34%, ctx=13, majf=0, minf=33 00:31:28.234 IO depths : 1=6.0%, 2=12.1%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349749: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=306, BW=1227KiB/s (1256kB/s)(12.0MiB/10016msec) 00:31:28.234 slat (usec): min=6, max=111, avg=37.29, stdev=23.08 
00:31:28.234 clat (msec): min=20, max=406, avg=51.82, stdev=70.56 00:31:28.234 lat (msec): min=20, max=406, avg=51.86, stdev=70.56 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 245], 00:31:28.234 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 405], 00:31:28.234 | 99.99th=[ 405] 00:31:28.234 bw ( KiB/s): min= 128, max= 1920, per=4.09%, avg=1221.60, stdev=806.28, samples=20 00:31:28.234 iops : min= 32, max= 480, avg=305.40, stdev=201.57, samples=20 00:31:28.234 lat (msec) : 50=92.64%, 100=0.59%, 250=2.15%, 500=4.62% 00:31:28.234 cpu : usr=96.68%, sys=2.15%, ctx=303, majf=0, minf=28 00:31:28.234 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349750: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=318, BW=1276KiB/s (1306kB/s)(12.5MiB/10022msec) 00:31:28.234 slat (usec): min=4, max=115, avg=33.63, stdev=16.90 00:31:28.234 clat (msec): min=16, max=455, avg=49.86, stdev=60.65 00:31:28.234 lat (msec): min=16, max=455, avg=49.89, stdev=60.64 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 245], 00:31:28.234 | 99.00th=[ 271], 99.50th=[ 321], 99.90th=[ 363], 99.95th=[ 456], 00:31:28.234 | 99.99th=[ 456] 00:31:28.234 bw ( KiB/s): min= 144, max= 1920, per=4.26%, avg=1272.00, stdev=793.40, samples=20 00:31:28.234 iops : min= 36, max= 480, avg=318.00, stdev=198.35, samples=20 00:31:28.234 lat (msec) : 20=0.13%, 50=91.99%, 100=0.19%, 250=4.07%, 500=3.63% 00:31:28.234 cpu : usr=98.09%, sys=1.50%, ctx=14, majf=0, minf=27 00:31:28.234 IO depths : 1=5.4%, 2=10.9%, 4=22.7%, 8=53.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349751: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=314, BW=1258KiB/s (1288kB/s)(12.3MiB/10009msec) 00:31:28.234 slat (nsec): min=8213, max=97470, avg=28090.27, stdev=15118.32 00:31:28.234 clat (msec): min=21, max=321, avg=50.65, stdev=59.99 00:31:28.234 lat (msec): min=21, max=321, avg=50.67, stdev=59.99 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 25], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 243], 00:31:28.234 | 99.00th=[ 268], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 321], 00:31:28.234 | 99.99th=[ 321] 00:31:28.234 bw ( KiB/s): min= 144, max= 1920, per=4.20%, avg=1252.80, stdev=783.67, samples=20 00:31:28.234 iops : min= 36, max= 480, avg=313.20, 
stdev=195.92, samples=20 00:31:28.234 lat (msec) : 50=91.87%, 250=4.38%, 500=3.75% 00:31:28.234 cpu : usr=98.13%, sys=1.47%, ctx=18, majf=0, minf=29 00:31:28.234 IO depths : 1=5.7%, 2=11.6%, 4=24.1%, 8=51.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename0: (groupid=0, jobs=1): err= 0: pid=1349752: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=312, BW=1252KiB/s (1282kB/s)(12.2MiB/10022msec) 00:31:28.234 slat (usec): min=8, max=111, avg=30.03, stdev=17.34 00:31:28.234 clat (msec): min=19, max=357, avg=50.87, stdev=61.55 00:31:28.234 lat (msec): min=19, max=357, avg=50.90, stdev=61.54 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.234 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 247], 00:31:28.234 | 99.00th=[ 321], 99.50th=[ 355], 99.90th=[ 359], 99.95th=[ 359], 00:31:28.234 | 99.99th=[ 359] 00:31:28.234 bw ( KiB/s): min= 128, max= 1920, per=4.18%, avg=1248.00, stdev=790.00, samples=20 00:31:28.234 iops : min= 32, max= 480, avg=312.00, stdev=197.50, samples=20 00:31:28.234 lat (msec) : 20=0.32%, 50=91.33%, 100=0.70%, 250=3.12%, 500=4.53% 00:31:28.234 cpu : usr=96.45%, sys=2.34%, ctx=227, majf=0, minf=22 00:31:28.234 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:28.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.234 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.234 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.234 filename1: (groupid=0, jobs=1): err= 0: pid=1349753: Wed Jul 24 20:00:43 2024 00:31:28.234 read: IOPS=304, BW=1218KiB/s (1247kB/s)(11.9MiB/10008msec) 00:31:28.234 slat (nsec): min=8019, max=92799, avg=27949.15, stdev=20712.74 00:31:28.234 clat (msec): min=19, max=522, avg=52.36, stdev=76.49 00:31:28.234 lat (msec): min=19, max=522, avg=52.39, stdev=76.49 00:31:28.234 clat percentiles (msec): 00:31:28.234 | 1.00th=[ 24], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:31:28.234 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.234 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 326], 00:31:28.234 | 99.00th=[ 397], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 523], 00:31:28.235 | 99.99th=[ 523] 00:31:28.235 bw ( KiB/s): min= 128, max= 1936, per=3.94%, avg=1175.58, stdev=824.05, samples=19 00:31:28.235 iops : min= 32, max= 484, avg=293.89, stdev=206.01, samples=19 00:31:28.235 lat (msec) : 20=0.20%, 50=92.59%, 100=1.38%, 250=0.72%, 500=5.05% 00:31:28.235 lat (msec) : 750=0.07% 00:31:28.235 cpu : usr=98.32%, sys=1.28%, ctx=15, majf=0, minf=68 00:31:28.235 IO depths : 1=0.5%, 2=3.7%, 4=13.1%, 8=67.9%, 16=14.8%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=91.9%, 8=5.2%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3048,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 
0: pid=1349754: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=317, BW=1271KiB/s (1301kB/s)(12.4MiB/10021msec) 00:31:28.235 slat (usec): min=4, max=111, avg=20.83, stdev=17.63 00:31:28.235 clat (msec): min=4, max=310, avg=50.17, stdev=58.15 00:31:28.235 lat (msec): min=4, max=310, avg=50.19, stdev=58.15 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.235 | 99.00th=[ 264], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 313], 00:31:28.235 | 99.99th=[ 313] 00:31:28.235 bw ( KiB/s): min= 256, max= 2048, per=4.25%, avg=1267.20, stdev=796.55, samples=20 00:31:28.235 iops : min= 64, max= 512, avg=316.80, stdev=199.14, samples=20 00:31:28.235 lat (msec) : 10=0.66%, 20=0.85%, 50=90.01%, 100=0.44%, 250=4.46% 00:31:28.235 lat (msec) : 500=3.58% 00:31:28.235 cpu : usr=97.74%, sys=1.68%, ctx=95, majf=0, minf=25 00:31:28.235 IO depths : 1=5.6%, 2=11.8%, 4=24.7%, 8=50.9%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 0: pid=1349755: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=314, BW=1257KiB/s (1287kB/s)(12.3MiB/10029msec) 00:31:28.235 slat (usec): min=8, max=111, avg=54.70, stdev=26.06 00:31:28.235 clat (msec): min=15, max=301, avg=50.43, stdev=58.40 00:31:28.235 lat (msec): min=15, max=301, avg=50.48, stdev=58.39 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.235 | 99.00th=[ 264], 99.50th=[ 268], 99.90th=[ 268], 99.95th=[ 300], 00:31:28.235 | 99.99th=[ 300] 00:31:28.235 bw ( KiB/s): min= 256, max= 1920, per=4.20%, avg=1254.40, stdev=791.08, samples=20 00:31:28.235 iops : min= 64, max= 480, avg=313.60, stdev=197.77, samples=20 00:31:28.235 lat (msec) : 20=0.29%, 50=91.15%, 100=0.44%, 250=4.44%, 500=3.68% 00:31:28.235 cpu : usr=97.88%, sys=1.33%, ctx=76, majf=0, minf=38 00:31:28.235 IO depths : 1=5.6%, 2=11.8%, 4=24.9%, 8=50.8%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 0: pid=1349756: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=311, BW=1246KiB/s (1276kB/s)(12.2MiB/10019msec) 00:31:28.235 slat (usec): min=8, max=119, avg=34.43, stdev=10.85 00:31:28.235 clat (msec): min=25, max=388, avg=51.05, stdev=63.43 00:31:28.235 lat (msec): min=25, max=388, avg=51.09, stdev=63.43 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.235 | 99.00th=[ 347], 99.50th=[ 372], 
99.90th=[ 388], 99.95th=[ 388], 00:31:28.235 | 99.99th=[ 388] 00:31:28.235 bw ( KiB/s): min= 176, max= 1920, per=4.16%, avg=1242.40, stdev=791.33, samples=20 00:31:28.235 iops : min= 44, max= 480, avg=310.60, stdev=197.83, samples=20 00:31:28.235 lat (msec) : 50=92.25%, 100=0.19%, 250=3.91%, 500=3.65% 00:31:28.235 cpu : usr=97.20%, sys=1.91%, ctx=123, majf=0, minf=32 00:31:28.235 IO depths : 1=5.5%, 2=11.5%, 4=24.0%, 8=52.0%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3122,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 0: pid=1349757: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=311, BW=1248KiB/s (1278kB/s)(12.2MiB/10022msec) 00:31:28.235 slat (usec): min=8, max=101, avg=35.32, stdev=20.20 00:31:28.235 clat (msec): min=11, max=396, avg=50.98, stdev=63.17 00:31:28.235 lat (msec): min=11, max=396, avg=51.02, stdev=63.16 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 247], 00:31:28.235 | 99.00th=[ 347], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:31:28.235 | 99.99th=[ 397] 00:31:28.235 bw ( KiB/s): min= 176, max= 1920, per=4.17%, avg=1244.00, stdev=794.39, samples=20 00:31:28.235 iops : min= 44, max= 480, avg=311.00, stdev=198.60, samples=20 00:31:28.235 lat (msec) : 20=0.13%, 50=91.36%, 100=1.15%, 250=2.82%, 500=4.54% 00:31:28.235 cpu : usr=98.06%, sys=1.53%, ctx=17, majf=0, minf=21 00:31:28.235 IO depths : 1=5.6%, 2=11.6%, 4=24.5%, 8=51.4%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 0: pid=1349758: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=303, BW=1215KiB/s (1244kB/s)(11.9MiB/10008msec) 00:31:28.235 slat (usec): min=8, max=100, avg=36.09, stdev=13.60 00:31:28.235 clat (msec): min=29, max=396, avg=52.32, stdev=76.96 00:31:28.235 lat (msec): min=29, max=396, avg=52.36, stdev=76.96 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 321], 00:31:28.235 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:31:28.235 | 99.99th=[ 397] 00:31:28.235 bw ( KiB/s): min= 128, max= 1920, per=3.93%, avg=1172.21, stdev=817.67, samples=19 00:31:28.235 iops : min= 32, max= 480, avg=293.05, stdev=204.42, samples=19 00:31:28.235 lat (msec) : 50=93.68%, 100=0.53%, 500=5.79% 00:31:28.235 cpu : usr=98.19%, sys=1.38%, ctx=11, majf=0, minf=29 00:31:28.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3040,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 0: pid=1349759: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=310, BW=1244KiB/s (1273kB/s)(12.1MiB/10004msec) 00:31:28.235 slat (nsec): min=8102, max=91022, avg=29607.90, stdev=12691.35 00:31:28.235 clat (msec): min=24, max=373, avg=51.22, stdev=62.19 00:31:28.235 lat (msec): min=24, max=373, avg=51.25, stdev=62.19 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 247], 00:31:28.235 | 99.00th=[ 321], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376], 00:31:28.235 | 99.99th=[ 376] 00:31:28.235 bw ( KiB/s): min= 256, max= 2048, per=4.05%, avg=1208.42, stdev=790.89, samples=19 00:31:28.235 iops : min= 64, max= 512, avg=302.11, stdev=197.72, samples=19 00:31:28.235 lat (msec) : 50=91.58%, 100=0.51%, 250=3.67%, 500=4.24% 00:31:28.235 cpu : usr=98.20%, sys=1.42%, ctx=13, majf=0, minf=28 00:31:28.235 IO depths : 1=5.7%, 2=11.6%, 4=24.0%, 8=51.9%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.235 filename1: (groupid=0, jobs=1): err= 0: pid=1349760: Wed Jul 24 20:00:43 2024 00:31:28.235 read: IOPS=305, BW=1221KiB/s (1251kB/s)(11.9MiB/10009msec) 00:31:28.235 slat (usec): min=8, max=109, avg=33.79, stdev=16.97 00:31:28.235 clat (msec): min=12, max=508, avg=52.06, stdev=76.22 00:31:28.235 lat (msec): min=12, max=508, avg=52.10, stdev=76.22 00:31:28.235 clat percentiles (msec): 00:31:28.235 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.235 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.235 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 321], 00:31:28.235 | 99.00th=[ 388], 99.50th=[ 418], 99.90th=[ 464], 99.95th=[ 510], 00:31:28.235 | 99.99th=[ 510] 00:31:28.235 bw ( KiB/s): min= 128, max= 2048, per=3.95%, avg=1178.95, stdev=824.67, samples=19 00:31:28.235 iops : min= 32, max= 512, avg=294.74, stdev=206.17, samples=19 00:31:28.235 lat (msec) : 20=0.52%, 50=92.67%, 100=1.05%, 500=5.69%, 750=0.07% 00:31:28.235 cpu : usr=98.34%, sys=1.26%, ctx=15, majf=0, minf=28 00:31:28.235 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:28.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349761: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=313, BW=1252KiB/s (1282kB/s)(12.2MiB/10019msec) 00:31:28.236 slat (nsec): min=8257, max=97555, avg=27003.86, stdev=14700.01 00:31:28.236 clat (msec): min=26, max=409, avg=50.90, stdev=59.60 00:31:28.236 lat (msec): min=26, max=409, avg=50.93, stdev=59.59 00:31:28.236 clat percentiles (msec): 00:31:28.236 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 
60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.236 | 99.00th=[ 268], 99.50th=[ 321], 99.90th=[ 321], 99.95th=[ 409], 00:31:28.236 | 99.99th=[ 409] 00:31:28.236 bw ( KiB/s): min= 144, max= 1920, per=4.18%, avg=1248.00, stdev=783.75, samples=20 00:31:28.236 iops : min= 36, max= 480, avg=312.00, stdev=195.94, samples=20 00:31:28.236 lat (msec) : 50=91.84%, 250=4.72%, 500=3.44% 00:31:28.236 cpu : usr=98.03%, sys=1.51%, ctx=50, majf=0, minf=38 00:31:28.236 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349762: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=311, BW=1245KiB/s (1275kB/s)(12.2MiB/10022msec) 00:31:28.236 slat (usec): min=8, max=118, avg=53.09, stdev=25.01 00:31:28.236 clat (msec): min=19, max=456, avg=50.92, stdev=65.43 00:31:28.236 lat (msec): min=19, max=456, avg=50.97, stdev=65.43 00:31:28.236 clat percentiles (msec): 00:31:28.236 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 247], 00:31:28.236 | 99.00th=[ 359], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 456], 00:31:28.236 | 99.99th=[ 456] 00:31:28.236 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=1241.60, stdev=797.85, samples=20 00:31:28.236 iops : min= 32, max= 480, avg=310.40, stdev=199.46, samples=20 00:31:28.236 lat (msec) : 20=0.06%, 50=92.31%, 100=0.45%, 250=2.56%, 500=4.62% 00:31:28.236 cpu : usr=96.77%, sys=2.15%, ctx=58, majf=0, minf=29 00:31:28.236 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349763: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=310, BW=1242KiB/s (1272kB/s)(12.1MiB/10015msec) 00:31:28.236 slat (usec): min=5, max=101, avg=37.15, stdev=16.53 00:31:28.236 clat (msec): min=22, max=488, avg=51.19, stdev=64.55 00:31:28.236 lat (msec): min=22, max=488, avg=51.22, stdev=64.54 00:31:28.236 clat percentiles (msec): 00:31:28.236 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.236 | 99.00th=[ 359], 99.50th=[ 372], 99.90th=[ 451], 99.95th=[ 489], 00:31:28.236 | 99.99th=[ 489] 00:31:28.236 bw ( KiB/s): min= 144, max= 1920, per=4.14%, avg=1236.45, stdev=795.63, samples=20 00:31:28.236 iops : min= 36, max= 480, avg=309.10, stdev=198.90, samples=20 00:31:28.236 lat (msec) : 50=91.77%, 100=0.84%, 250=3.22%, 500=4.18% 00:31:28.236 cpu : usr=94.87%, sys=2.95%, ctx=211, majf=0, minf=26 00:31:28.236 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349764: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=318, BW=1276KiB/s (1306kB/s)(12.5MiB/10010msec) 00:31:28.236 slat (usec): min=8, max=110, avg=32.09, stdev=16.05 00:31:28.236 clat (msec): min=16, max=382, avg=49.89, stdev=61.67 00:31:28.236 lat (msec): min=16, max=382, avg=49.92, stdev=61.67 00:31:28.236 clat percentiles (msec): 00:31:28.236 | 1.00th=[ 22], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 243], 00:31:28.236 | 99.00th=[ 321], 99.50th=[ 351], 99.90th=[ 384], 99.95th=[ 384], 00:31:28.236 | 99.99th=[ 384] 00:31:28.236 bw ( KiB/s): min= 144, max= 1920, per=4.26%, avg=1270.40, stdev=795.58, samples=20 00:31:28.236 iops : min= 36, max= 480, avg=317.60, stdev=198.90, samples=20 00:31:28.236 lat (msec) : 20=0.13%, 50=92.11%, 100=0.19%, 250=3.51%, 500=4.07% 00:31:28.236 cpu : usr=95.98%, sys=2.26%, ctx=239, majf=0, minf=35 00:31:28.236 IO depths : 1=5.0%, 2=10.5%, 4=22.6%, 8=54.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349765: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=311, BW=1246KiB/s (1276kB/s)(12.2MiB/10019msec) 00:31:28.236 slat (nsec): min=8049, max=95127, avg=28619.85, stdev=10797.59 00:31:28.236 clat (msec): min=29, max=369, avg=51.14, stdev=59.99 00:31:28.236 lat (msec): min=29, max=369, avg=51.17, stdev=59.99 00:31:28.236 clat percentiles (msec): 00:31:28.236 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 241], 00:31:28.236 | 99.00th=[ 268], 99.50th=[ 305], 99.90th=[ 372], 99.95th=[ 372], 00:31:28.236 | 99.99th=[ 372] 00:31:28.236 bw ( KiB/s): min= 256, max= 1920, per=4.16%, avg=1241.60, stdev=781.76, samples=20 00:31:28.236 iops : min= 64, max= 480, avg=310.40, stdev=195.44, samples=20 00:31:28.236 lat (msec) : 50=91.28%, 100=0.51%, 250=5.00%, 500=3.21% 00:31:28.236 cpu : usr=98.13%, sys=1.47%, ctx=14, majf=0, minf=38 00:31:28.236 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349766: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=303, BW=1215KiB/s (1244kB/s)(11.9MiB/10012msec) 00:31:28.236 slat (nsec): min=4272, max=67058, avg=32697.29, stdev=10183.84 00:31:28.236 clat (msec): min=27, max=505, avg=52.41, stdev=77.38 00:31:28.236 lat (msec): min=27, max=505, avg=52.44, stdev=77.38 00:31:28.236 clat percentiles (msec): 
00:31:28.236 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 321], 00:31:28.236 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 502], 99.95th=[ 506], 00:31:28.236 | 99.99th=[ 506] 00:31:28.236 bw ( KiB/s): min= 128, max= 1920, per=4.05%, avg=1208.95, stdev=818.94, samples=20 00:31:28.236 iops : min= 32, max= 480, avg=302.20, stdev=204.71, samples=20 00:31:28.236 lat (msec) : 50=93.68%, 100=0.53%, 250=0.13%, 500=5.53%, 750=0.13% 00:31:28.236 cpu : usr=97.57%, sys=1.98%, ctx=28, majf=0, minf=27 00:31:28.236 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349767: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=319, BW=1277KiB/s (1307kB/s)(12.5MiB/10014msec) 00:31:28.236 slat (usec): min=6, max=127, avg=28.27, stdev=22.07 00:31:28.236 clat (msec): min=4, max=321, avg=49.89, stdev=59.13 00:31:28.236 lat (msec): min=4, max=321, avg=49.91, stdev=59.12 00:31:28.236 clat percentiles (msec): 00:31:28.236 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.236 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.236 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.236 | 99.00th=[ 268], 99.50th=[ 317], 99.90th=[ 321], 99.95th=[ 321], 00:31:28.236 | 99.99th=[ 321] 00:31:28.236 bw ( KiB/s): min= 256, max= 2224, per=4.28%, avg=1277.60, stdev=807.85, samples=20 00:31:28.236 iops : min= 64, max= 556, avg=319.40, stdev=201.96, samples=20 00:31:28.236 lat (msec) : 10=0.25%, 20=1.75%, 50=89.92%, 100=0.06%, 250=4.51% 00:31:28.236 lat (msec) : 500=3.50% 00:31:28.236 cpu : usr=97.15%, sys=1.94%, ctx=48, majf=0, minf=30 00:31:28.236 IO depths : 1=0.8%, 2=6.7%, 4=23.9%, 8=56.9%, 16=11.8%, 32=0.0%, >=64=0.0% 00:31:28.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 complete : 0=0.0%, 4=94.1%, 8=0.4%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.236 issued rwts: total=3196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.236 filename2: (groupid=0, jobs=1): err= 0: pid=1349768: Wed Jul 24 20:00:43 2024 00:31:28.236 read: IOPS=309, BW=1236KiB/s (1266kB/s)(12.1MiB/10009msec) 00:31:28.236 slat (usec): min=6, max=112, avg=36.36, stdev=13.67 00:31:28.236 clat (msec): min=27, max=459, avg=51.42, stdev=65.75 00:31:28.236 lat (msec): min=27, max=459, avg=51.46, stdev=65.74 00:31:28.236 clat percentiles (msec): 00:31:28.237 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:31:28.237 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:31:28.237 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 243], 00:31:28.237 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 380], 99.95th=[ 460], 00:31:28.237 | 99.99th=[ 460] 00:31:28.237 bw ( KiB/s): min= 176, max= 1920, per=4.12%, avg=1231.20, stdev=789.31, samples=20 00:31:28.237 iops : min= 44, max= 480, avg=307.80, stdev=197.33, samples=20 00:31:28.237 lat (msec) : 50=92.05%, 100=0.52%, 250=3.30%, 500=4.14% 00:31:28.237 cpu : usr=98.11%, sys=1.49%, 
ctx=16, majf=0, minf=28 00:31:28.237 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:28.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.237 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.237 issued rwts: total=3094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:28.237 00:31:28.237 Run status group 0 (all jobs): 00:31:28.237 READ: bw=29.1MiB/s (30.6MB/s), 1215KiB/s-1277KiB/s (1244kB/s-1307kB/s), io=292MiB (306MB), run=10004-10029msec 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # 
xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 bdev_null0 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 [2024-07-24 20:00:44.059740] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.237 20:00:44 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 bdev_null1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # config=() 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@536 -- # local subsystem config 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:28.237 { 00:31:28.237 "params": { 00:31:28.237 "name": "Nvme$subsystem", 00:31:28.237 "trtype": "$TEST_TRANSPORT", 00:31:28.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.237 "adrfam": "ipv4", 00:31:28.237 "trsvcid": "$NVMF_PORT", 00:31:28.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.237 "hdgst": ${hdgst:-false}, 00:31:28.237 "ddgst": ${ddgst:-false} 00:31:28.237 }, 00:31:28.237 "method": 
"bdev_nvme_attach_controller" 00:31:28.237 } 00:31:28.237 EOF 00:31:28.237 )") 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:28.237 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local sanitizers 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # shift 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local asan_lib= 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # grep libasan 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:28.238 { 00:31:28.238 "params": { 00:31:28.238 "name": "Nvme$subsystem", 00:31:28.238 "trtype": "$TEST_TRANSPORT", 00:31:28.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.238 "adrfam": "ipv4", 00:31:28.238 "trsvcid": "$NVMF_PORT", 00:31:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.238 "hdgst": ${hdgst:-false}, 00:31:28.238 "ddgst": ${ddgst:-false} 00:31:28.238 }, 00:31:28.238 "method": "bdev_nvme_attach_controller" 00:31:28.238 } 00:31:28.238 EOF 00:31:28.238 )") 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # cat 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # jq . 
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@561 -- # IFS=,
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # printf '%s\n' '{
00:31:28.238 "params": {
00:31:28.238 "name": "Nvme0",
00:31:28.238 "trtype": "tcp",
00:31:28.238 "traddr": "10.0.0.2",
00:31:28.238 "adrfam": "ipv4",
00:31:28.238 "trsvcid": "4420",
00:31:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:28.238 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:28.238 "hdgst": false,
00:31:28.238 "ddgst": false
00:31:28.238 },
00:31:28.238 "method": "bdev_nvme_attach_controller"
00:31:28.238 },{
00:31:28.238 "params": {
00:31:28.238 "name": "Nvme1",
00:31:28.238 "trtype": "tcp",
00:31:28.238 "traddr": "10.0.0.2",
00:31:28.238 "adrfam": "ipv4",
00:31:28.238 "trsvcid": "4420",
00:31:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:28.238 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:28.238 "hdgst": false,
00:31:28.238 "ddgst": false
00:31:28.238 },
00:31:28.238 "method": "bdev_nvme_attach_controller"
00:31:28.238 }'
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # asan_lib=
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # [[ -n '' ]]
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}"
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # awk '{print $3}'
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # asan_lib=
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # [[ -n '' ]]
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:28.238 20:00:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:28.238 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:31:28.238 ...
00:31:28.238 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:31:28.238 ...
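The job file handed to fio on /dev/fd/61 is never echoed into this log, but the filename0/filename1 headers above pin down its shape: numjobs=2 over two null-backed bdevs, with the bs=8k,16k,128k triple set in dif.sh surfacing as the (R)/(W)/(T) block sizes. A reconstruction under those assumptions, not the harness's verbatim file (the bdev names follow from the two attach_controller entries):

[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k   ; read 8k / write 16k / trim 128k -> the (R)/(W)/(T) triple above
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

The /dev/fd/62 side is the JSON printed just above; gen_nvmf_target_json wraps those attach_controller fragments in the standard {"subsystems":[{"subsystem":"bdev","config":[...]}]} envelope that the plugin's spdk_json_conf option expects. The same run can be reproduced outside the harness with an ordinary file; a minimal bash sketch, assuming the plugin still lives at the path the trace shows and the target from this run is listening on 10.0.0.2:4420 (the /tmp path is illustrative):

#!/usr/bin/env bash
# Sketch: drive one of this run's NVMe-oF TCP subsystems through fio's SPDK
# bdev engine, feeding the bdev config from a plain file instead of /dev/fd/62.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0"
          }
        }
      ]
    }
  ]
}
EOF

# LD_PRELOAD mirrors the harness invocation above; Nvme0n1 is the namespace
# bdev that bdev_nvme_attach_controller creates for name=Nvme0.
LD_PRELOAD="$PLUGIN" fio --name=randread --ioengine=spdk_bdev --thread=1 \
    --spdk_json_conf=/tmp/bdev.json --filename=Nvme0n1 \
    --rw=randread --bs=8k --iodepth=8 --runtime=5 --time_based=1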
00:31:28.238 fio-3.35 00:31:28.238 Starting 4 threads 00:31:28.238 EAL: No free 2048 kB hugepages reported on node 1 00:31:33.497 00:31:33.497 filename0: (groupid=0, jobs=1): err= 0: pid=1351158: Wed Jul 24 20:00:50 2024 00:31:33.497 read: IOPS=1828, BW=14.3MiB/s (15.0MB/s)(71.5MiB/5001msec) 00:31:33.497 slat (nsec): min=6748, max=72546, avg=17417.11, stdev=9375.73 00:31:33.497 clat (usec): min=865, max=8293, avg=4312.55, stdev=554.45 00:31:33.497 lat (usec): min=879, max=8308, avg=4329.96, stdev=555.20 00:31:33.497 clat percentiles (usec): 00:31:33.497 | 1.00th=[ 2638], 5.00th=[ 3556], 10.00th=[ 3916], 20.00th=[ 4080], 00:31:33.497 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:31:33.497 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4883], 00:31:33.497 | 99.00th=[ 6652], 99.50th=[ 6915], 99.90th=[ 7570], 99.95th=[ 7635], 00:31:33.497 | 99.99th=[ 8291] 00:31:33.497 bw ( KiB/s): min=13899, max=15424, per=25.01%, avg=14623.50, stdev=608.15, samples=10 00:31:33.497 iops : min= 1737, max= 1928, avg=1827.90, stdev=76.07, samples=10 00:31:33.497 lat (usec) : 1000=0.02% 00:31:33.497 lat (msec) : 2=0.46%, 4=12.92%, 10=86.60% 00:31:33.497 cpu : usr=93.24%, sys=5.98%, ctx=11, majf=0, minf=52 00:31:33.497 IO depths : 1=0.1%, 2=15.9%, 4=57.1%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 issued rwts: total=9146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.497 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:33.497 filename0: (groupid=0, jobs=1): err= 0: pid=1351159: Wed Jul 24 20:00:50 2024 00:31:33.497 read: IOPS=1819, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:31:33.497 slat (nsec): min=6598, max=73574, avg=17453.97, stdev=9436.27 00:31:33.497 clat (usec): min=774, max=8232, avg=4330.16, stdev=578.90 00:31:33.497 lat (usec): min=789, max=8246, avg=4347.61, stdev=579.69 00:31:33.497 clat percentiles (usec): 00:31:33.497 | 1.00th=[ 2180], 5.00th=[ 3752], 10.00th=[ 3982], 20.00th=[ 4113], 00:31:33.497 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:31:33.497 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4883], 00:31:33.497 | 99.00th=[ 6849], 99.50th=[ 7308], 99.90th=[ 7832], 99.95th=[ 7898], 00:31:33.497 | 99.99th=[ 8225] 00:31:33.497 bw ( KiB/s): min=13952, max=15232, per=25.02%, avg=14629.33, stdev=426.86, samples=9 00:31:33.497 iops : min= 1744, max= 1904, avg=1828.67, stdev=53.36, samples=9 00:31:33.497 lat (usec) : 1000=0.09% 00:31:33.497 lat (msec) : 2=0.73%, 4=10.24%, 10=88.94% 00:31:33.497 cpu : usr=93.36%, sys=5.86%, ctx=14, majf=0, minf=33 00:31:33.497 IO depths : 1=0.1%, 2=17.4%, 4=56.2%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 issued rwts: total=9099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.497 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:33.497 filename1: (groupid=0, jobs=1): err= 0: pid=1351160: Wed Jul 24 20:00:50 2024 00:31:33.497 read: IOPS=1820, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:31:33.497 slat (nsec): min=6607, max=74023, avg=17453.97, stdev=9283.67 00:31:33.497 clat (usec): min=799, max=8014, avg=4328.65, stdev=575.64 00:31:33.497 lat (usec): min=812, max=8028, avg=4346.11, stdev=576.38 00:31:33.497 
clat percentiles (usec): 00:31:33.497 | 1.00th=[ 2212], 5.00th=[ 3720], 10.00th=[ 3982], 20.00th=[ 4113], 00:31:33.497 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:31:33.497 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4883], 00:31:33.497 | 99.00th=[ 6718], 99.50th=[ 7111], 99.90th=[ 7635], 99.95th=[ 7767], 00:31:33.497 | 99.99th=[ 8029] 00:31:33.497 bw ( KiB/s): min=13952, max=15104, per=25.03%, avg=14634.22, stdev=385.31, samples=9 00:31:33.497 iops : min= 1744, max= 1888, avg=1829.22, stdev=48.23, samples=9 00:31:33.497 lat (usec) : 1000=0.09% 00:31:33.497 lat (msec) : 2=0.71%, 4=10.65%, 10=88.54% 00:31:33.497 cpu : usr=93.12%, sys=6.14%, ctx=13, majf=0, minf=36 00:31:33.497 IO depths : 1=0.1%, 2=16.8%, 4=56.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 issued rwts: total=9105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.497 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:33.497 filename1: (groupid=0, jobs=1): err= 0: pid=1351161: Wed Jul 24 20:00:50 2024 00:31:33.497 read: IOPS=1839, BW=14.4MiB/s (15.1MB/s)(71.9MiB/5002msec) 00:31:33.497 slat (nsec): min=6495, max=90357, avg=17774.81, stdev=8549.83 00:31:33.497 clat (usec): min=1054, max=8247, avg=4289.89, stdev=465.42 00:31:33.497 lat (usec): min=1075, max=8284, avg=4307.66, stdev=466.09 00:31:33.497 clat percentiles (usec): 00:31:33.497 | 1.00th=[ 2966], 5.00th=[ 3589], 10.00th=[ 3884], 20.00th=[ 4080], 00:31:33.497 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:31:33.497 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4752], 00:31:33.497 | 99.00th=[ 6063], 99.50th=[ 6587], 99.90th=[ 7898], 99.95th=[ 7963], 00:31:33.497 | 99.99th=[ 8225] 00:31:33.497 bw ( KiB/s): min=13952, max=15296, per=25.16%, avg=14710.40, stdev=513.68, samples=10 00:31:33.497 iops : min= 1744, max= 1912, avg=1838.80, stdev=64.21, samples=10 00:31:33.497 lat (msec) : 2=0.23%, 4=13.12%, 10=86.66% 00:31:33.497 cpu : usr=92.84%, sys=6.26%, ctx=120, majf=0, minf=73 00:31:33.497 IO depths : 1=0.2%, 2=11.5%, 4=61.2%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:33.497 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:33.497 issued rwts: total=9202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:33.497 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:33.497 00:31:33.497 Run status group 0 (all jobs): 00:31:33.498 READ: bw=57.1MiB/s (59.9MB/s), 14.2MiB/s-14.4MiB/s (14.9MB/s-15.1MB/s), io=286MiB (299MB), run=5001-5002msec 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 00:31:33.498 real 0m24.112s 00:31:33.498 user 4m31.560s 00:31:33.498 sys 0m7.390s 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 ************************************ 00:31:33.498 END TEST fio_dif_rand_params 00:31:33.498 ************************************ 00:31:33.498 20:00:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:33.498 20:00:50 nvmf_dif -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:31:33.498 20:00:50 nvmf_dif -- common/autotest_common.sh@1108 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 ************************************ 00:31:33.498 START TEST fio_dif_digest 00:31:33.498 ************************************ 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # fio_dif_digest 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 bdev_null0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:33.498 [2024-07-24 20:00:50.434055] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@536 -- # config=() 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@536 -- # local subsystem config 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@538 -- # for subsystem in "${@:-1}" 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # config+=("$(cat <<-EOF 00:31:33.498 { 00:31:33.498 "params": { 00:31:33.498 "name": "Nvme$subsystem", 00:31:33.498 "trtype": "$TEST_TRANSPORT", 00:31:33.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.498 "adrfam": "ipv4", 00:31:33.498 "trsvcid": "$NVMF_PORT", 00:31:33.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.498 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.498 
"hdgst": ${hdgst:-false}, 00:31:33.498 "ddgst": ${ddgst:-false} 00:31:33.498 }, 00:31:33.498 "method": "bdev_nvme_attach_controller" 00:31:33.498 } 00:31:33.498 EOF 00:31:33.498 )") 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local fio_dir=/usr/src/fio 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local sanitizers 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # shift 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local asan_lib= 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # cat 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # grep libasan 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # jq . 
00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@561 -- # IFS=, 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # printf '%s\n' '{ 00:31:33.498 "params": { 00:31:33.498 "name": "Nvme0", 00:31:33.498 "trtype": "tcp", 00:31:33.498 "traddr": "10.0.0.2", 00:31:33.498 "adrfam": "ipv4", 00:31:33.498 "trsvcid": "4420", 00:31:33.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:33.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:33.498 "hdgst": true, 00:31:33.498 "ddgst": true 00:31:33.498 }, 00:31:33.498 "method": "bdev_nvme_attach_controller" 00:31:33.498 }' 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # asan_lib= 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # for sanitizer in "${sanitizers[@]}" 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # grep libclang_rt.asan 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # awk '{print $3}' 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # asan_lib= 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # [[ -n '' ]] 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:33.498 20:00:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:33.498 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:33.498 ... 
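
Note on the wiring the trace above just assembled: the digest test hands fio two file descriptors, an SPDK JSON config on /dev/fd/62 (the printf'd bdev_nvme_attach_controller block) and a fio job file on /dev/fd/61, while LD_PRELOAD pulls in SPDK's external spdk_bdev ioengine. A minimal standalone sketch of the same invocation follows; the SPDK path is hypothetical, and the Nvme0n1 filename follows SPDK's "<controller>n<nsid>" bdev naming convention rather than anything read from this trace.

#!/usr/bin/env bash
# Sketch only: drive an NVMe-oF/TCP namespace through fio via SPDK's bdev
# ioengine with header and data digests enabled, mirroring the harness above.
SPDK=/path/to/spdk                      # hypothetical checkout location
PLUGIN=$SPDK/build/fio/spdk_bdev        # fio external engine built by SPDK

gen_conf() {                            # JSON consumed via --spdk_json_conf
cat <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      }
    } ]
  } ]
}
JSON
}

gen_job() {                             # fio job file on the second fd
cat <<'JOB'
[global]
ioengine=spdk_bdev
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
JOB
}

LD_PRELOAD=$PLUGIN fio --ioengine=spdk_bdev --spdk_json_conf <(gen_conf) <(gen_job)

With hdgst and ddgst set to true the initiator computes and verifies CRC32C header and data digests on every PDU, which is the behavior this fio_dif_digest pass exercises end to end.
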
00:31:33.498 fio-3.35 00:31:33.498 Starting 3 threads 00:31:33.498 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.699 00:31:45.699 filename0: (groupid=0, jobs=1): err= 0: pid=1351915: Wed Jul 24 20:01:01 2024 00:31:45.699 read: IOPS=200, BW=25.0MiB/s (26.2MB/s)(251MiB/10047msec) 00:31:45.699 slat (nsec): min=5082, max=68820, avg=18798.26, stdev=6127.81 00:31:45.699 clat (usec): min=9110, max=54071, avg=14951.76, stdev=1718.39 00:31:45.699 lat (usec): min=9131, max=54086, avg=14970.56, stdev=1718.50 00:31:45.699 clat percentiles (usec): 00:31:45.699 | 1.00th=[10290], 5.00th=[13042], 10.00th=[13435], 20.00th=[14091], 00:31:45.699 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:31:45.699 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16319], 95.00th=[16712], 00:31:45.699 | 99.00th=[17957], 99.50th=[18220], 99.90th=[25035], 99.95th=[47973], 00:31:45.699 | 99.99th=[54264] 00:31:45.699 bw ( KiB/s): min=24064, max=27136, per=34.81%, avg=25689.60, stdev=844.06, samples=20 00:31:45.699 iops : min= 188, max= 212, avg=200.70, stdev= 6.59, samples=20 00:31:45.699 lat (msec) : 10=0.80%, 20=98.96%, 50=0.20%, 100=0.05% 00:31:45.699 cpu : usr=92.50%, sys=7.00%, ctx=25, majf=0, minf=98 00:31:45.699 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.699 issued rwts: total=2010,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:45.699 filename0: (groupid=0, jobs=1): err= 0: pid=1351916: Wed Jul 24 20:01:01 2024 00:31:45.699 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10046msec) 00:31:45.699 slat (nsec): min=4797, max=93754, avg=15595.36, stdev=4888.88 00:31:45.699 clat (usec): min=9963, max=52937, avg=15902.16, stdev=1752.37 00:31:45.699 lat (usec): min=9982, max=52957, avg=15917.76, stdev=1752.30 00:31:45.699 clat percentiles (usec): 00:31:45.699 | 1.00th=[11731], 5.00th=[13960], 10.00th=[14353], 20.00th=[14877], 00:31:45.699 | 30.00th=[15270], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:31:45.699 | 70.00th=[16450], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:31:45.699 | 99.00th=[18744], 99.50th=[19268], 99.90th=[49546], 99.95th=[52691], 00:31:45.699 | 99.99th=[52691] 00:31:45.699 bw ( KiB/s): min=23040, max=25600, per=32.74%, avg=24166.40, stdev=819.71, samples=20 00:31:45.699 iops : min= 180, max= 200, avg=188.80, stdev= 6.40, samples=20 00:31:45.699 lat (msec) : 10=0.05%, 20=99.74%, 50=0.16%, 100=0.05% 00:31:45.699 cpu : usr=92.40%, sys=7.13%, ctx=31, majf=0, minf=178 00:31:45.699 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.699 issued rwts: total=1890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:45.699 filename0: (groupid=0, jobs=1): err= 0: pid=1351917: Wed Jul 24 20:01:01 2024 00:31:45.699 read: IOPS=189, BW=23.6MiB/s (24.8MB/s)(237MiB/10007msec) 00:31:45.699 slat (nsec): min=7249, max=45902, avg=15845.99, stdev=4475.85 00:31:45.699 clat (usec): min=7823, max=57960, avg=15839.05, stdev=3471.50 00:31:45.699 lat (usec): min=7841, max=57979, avg=15854.89, stdev=3471.59 00:31:45.699 clat percentiles (usec): 
00:31:45.699 | 1.00th=[13042], 5.00th=[13698], 10.00th=[14091], 20.00th=[14615], 00:31:45.699 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15533], 60.00th=[15926], 00:31:45.699 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:31:45.699 | 99.00th=[18744], 99.50th=[56361], 99.90th=[57410], 99.95th=[57934], 00:31:45.699 | 99.99th=[57934] 00:31:45.699 bw ( KiB/s): min=21504, max=26112, per=32.78%, avg=24194.45, stdev=1413.94, samples=20 00:31:45.699 iops : min= 168, max= 204, avg=189.00, stdev=11.04, samples=20 00:31:45.699 lat (msec) : 10=0.11%, 20=99.26%, 100=0.63% 00:31:45.699 cpu : usr=92.09%, sys=7.45%, ctx=23, majf=0, minf=138 00:31:45.699 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:45.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.699 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.699 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:45.699 00:31:45.699 Run status group 0 (all jobs): 00:31:45.699 READ: bw=72.1MiB/s (75.6MB/s), 23.5MiB/s-25.0MiB/s (24.7MB/s-26.2MB/s), io=724MiB (759MB), run=10007-10047msec 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:45.699 00:31:45.699 real 0m11.090s 00:31:45.699 user 0m28.954s 00:31:45.699 sys 0m2.453s 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # xtrace_disable 00:31:45.699 20:01:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:45.699 ************************************ 00:31:45.699 END TEST fio_dif_digest 00:31:45.699 ************************************ 00:31:45.699 20:01:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:45.699 20:01:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@492 -- # nvmfcleanup 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:45.699 rmmod nvme_tcp 00:31:45.699 rmmod 
nvme_fabrics 00:31:45.699 rmmod nvme_keyring 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@493 -- # '[' -n 1345353 ']' 00:31:45.699 20:01:01 nvmf_dif -- nvmf/common.sh@494 -- # killprocess 1345353 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@951 -- # '[' -z 1345353 ']' 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@955 -- # kill -0 1345353 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@956 -- # uname 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1345353 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:31:45.699 20:01:01 nvmf_dif -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1345353' 00:31:45.699 killing process with pid 1345353 00:31:45.700 20:01:01 nvmf_dif -- common/autotest_common.sh@970 -- # kill 1345353 00:31:45.700 20:01:01 nvmf_dif -- common/autotest_common.sh@975 -- # wait 1345353 00:31:45.700 20:01:01 nvmf_dif -- nvmf/common.sh@496 -- # '[' iso == iso ']' 00:31:45.700 20:01:01 nvmf_dif -- nvmf/common.sh@497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:45.700 Waiting for block devices as requested 00:31:45.700 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:45.956 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:45.956 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:45.956 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:45.956 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:46.213 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:46.213 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:46.213 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:46.213 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:46.469 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:46.469 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:46.469 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:46.469 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:46.726 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:46.727 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:46.727 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:46.727 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:46.985 20:01:04 nvmf_dif -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:31:46.985 20:01:04 nvmf_dif -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:31:46.985 20:01:04 nvmf_dif -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:46.985 20:01:04 nvmf_dif -- nvmf/common.sh@282 -- # remove_spdk_ns 00:31:46.985 20:01:04 nvmf_dif -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.985 20:01:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:46.985 20:01:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.885 20:01:06 nvmf_dif -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:31:48.885 00:31:48.885 real 1m6.526s 00:31:48.885 user 6m27.950s 00:31:48.885 sys 0m19.078s 00:31:48.885 20:01:06 nvmf_dif -- common/autotest_common.sh@1127 -- # xtrace_disable 00:31:48.885 20:01:06 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:31:48.885 ************************************ 00:31:48.885 END TEST nvmf_dif 00:31:48.885 ************************************ 00:31:48.885 20:01:06 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:48.885 20:01:06 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:31:48.885 20:01:06 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:31:48.885 20:01:06 -- common/autotest_common.sh@10 -- # set +x 00:31:49.143 ************************************ 00:31:49.143 START TEST nvmf_abort_qd_sizes 00:31:49.143 ************************************ 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:49.143 * Looking for test storage... 00:31:49.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:49.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # '[' -z tcp ']' 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@452 -- # prepare_net_devs 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # local -g is_hw=no 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # remove_spdk_ns 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:49.143 20:01:06 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:49.143 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ phy != virt ]] 00:31:49.144 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # gather_supported_nvmf_pci_devs 00:31:49.144 20:01:06 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # xtrace_disable 00:31:49.144 20:01:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # pci_devs=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -a pci_devs 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # pci_net_devs=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -a pci_net_devs 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # pci_drivers=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -A pci_drivers 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # net_devs=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # local -ga net_devs 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # e810=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # local -ga e810 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # x722=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # local -ga x722 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # mlx=() 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # local -ga mlx 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@305 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:51.074 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@324 -- # pci_devs+=("${e810[@]}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # [[ tcp == rdma ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # [[ e810 == mlx5 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@333 -- # [[ e810 == e810 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # pci_devs=("${e810[@]}") 00:31:51.075 20:01:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # (( 2 == 0 )) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:51.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # for pci in "${pci_devs[@]}" 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@345 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:51.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unknown ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ ice == unbound ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@354 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # [[ tcp == rdma ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@370 -- # (( 0 > 0 )) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ e810 == e810 ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ tcp == rdma ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # [[ up == up ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:51.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@386 -- # for pci in "${pci_devs[@]}" 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@387 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # [[ tcp == tcp ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@393 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # [[ up == up ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # (( 1 == 0 )) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@403 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:51.075 Found net 
devices under 0000:0a:00.1: cvl_0_1 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@405 -- # net_devs+=("${pci_net_devs[@]}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@408 -- # (( 2 == 0 )) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # is_hw=yes 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@420 -- # [[ yes == yes ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@421 -- # [[ tcp == tcp ]] 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # nvmf_tcp_init 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 2 > 1 )) 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # NVMF_SECOND_TARGET_IP= 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@247 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip -4 addr flush cvl_0_0 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@249 -- # ip -4 addr flush cvl_0_1 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # ip netns add cvl_0_0_ns_spdk 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # ip link set cvl_0_1 up 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ping -c 1 10.0.0.2 00:31:51.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:51.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:31:51.075 00:31:51.075 --- 10.0.0.2 ping statistics --- 00:31:51.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.075 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@272 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:51.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
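
The plumbing the trace just finished is worth calling out: with two ports of one e810 NIC, the harness builds a real initiator/target link by pushing the target-side port into a private network namespace instead of relying on loopback. Condensed from the commands above (interface and namespace names as in the trace, error handling omitted):

#!/usr/bin/env bash
# Two ports of the same NIC become a point-to-point NVMe/TCP test link.
TGT_IF=cvl_0_0          # target side, moved into the namespace
INI_IF=cvl_0_1          # initiator side, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
# Admit NVMe/TCP traffic on the initiator-side interface.
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions.
ping -c 1 10.0.0.2
ip netns exec $NS ping -c 1 10.0.0.1

From this point every target-side process is prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is why the nvmf_tgt launch below carries that wrapper.
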
00:31:51.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:31:51.075 00:31:51.075 --- 10.0.0.1 ping statistics --- 00:31:51.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:51.075 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # return 0 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # '[' iso == iso ']' 00:31:51.075 20:01:08 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:52.449 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:52.449 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:52.449 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:52.449 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:52.450 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:52.450 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:52.450 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:52.450 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:52.450 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:53.385 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@458 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == \r\d\m\a ]] 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # [[ tcp == \t\c\p ]] 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # '[' tcp == tcp ']' 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # modprobe nvme-tcp 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_enter start_nvmf_tgt 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@725 -- # xtrace_disable 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@485 -- # nvmfpid=1356711 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- nvmf/common.sh@486 -- # waitforlisten 1356711 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # '[' -z 1356711 ']' 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local max_retries=100 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:53.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@841 -- # xtrace_disable 00:31:53.385 20:01:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:53.385 [2024-07-24 20:01:10.716286] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:31:53.385 [2024-07-24 20:01:10.716391] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:53.385 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.643 [2024-07-24 20:01:10.785937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:53.643 [2024-07-24 20:01:10.904437] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:53.643 [2024-07-24 20:01:10.904497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:53.643 [2024-07-24 20:01:10.904513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:53.643 [2024-07-24 20:01:10.904526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:53.643 [2024-07-24 20:01:10.904537] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:53.643 [2024-07-24 20:01:10.904632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:53.643 [2024-07-24 20:01:10.904702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:53.643 [2024-07-24 20:01:10.904794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:53.643 [2024-07-24 20:01:10.904797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@865 -- # return 0 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- nvmf/common.sh@487 -- # timing_exit start_nvmf_tgt 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@731 -- # xtrace_disable 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:31:54.573 20:01:11 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1108 -- # xtrace_disable 00:31:54.573 20:01:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:54.573 ************************************ 00:31:54.573 START TEST spdk_target_abort 00:31:54.573 ************************************ 00:31:54.573 20:01:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # spdk_target 00:31:54.573 20:01:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:54.573 20:01:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:31:54.573 20:01:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:54.573 20:01:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.857 spdk_targetn1 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.857 [2024-07-24 20:01:14.532963] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:57.857 [2024-07-24 20:01:14.565250] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:57.857 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:57.858 20:01:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:57.858 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:00.392 Initializing NVMe Controllers 00:32:00.392 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:00.392 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:00.392 Initialization complete. Launching workers. 00:32:00.392 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11528, failed: 0 00:32:00.392 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 10324 00:32:00.392 success 720, unsuccess 484, failed 0 00:32:00.392 20:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:00.392 20:01:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:00.392 EAL: No free 2048 kB hugepages reported on node 1 00:32:03.666 Initializing NVMe Controllers 00:32:03.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:03.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:03.666 Initialization complete. Launching workers. 00:32:03.666 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8799, failed: 0 00:32:03.666 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1210, failed to submit 7589 00:32:03.666 success 311, unsuccess 899, failed 0 00:32:03.666 20:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:03.666 20:01:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:03.666 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.937 Initializing NVMe Controllers 00:32:06.937 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:06.937 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:06.937 Initialization complete. Launching workers. 
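
Each of these blocks is one pass of the same abort example binary, swept over queue depths 4, 24 and 64 against the SPDK target; only the -q value changes between runs. Stripped of the harness scaffolding, the sweep is roughly the loop below (the example path is where an SPDK build would normally place it, shown here as an assumption):

#!/usr/bin/env bash
ABORT=/path/to/spdk/build/examples/abort   # hypothetical checkout location
TRID='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/Os; -q: queue depth.
    "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TRID"
done

The success/unsuccess split in each summary separates aborts that caught their I/O still outstanding from aborts that completed without cancelling anything; deeper queues keep more I/O in flight, which matches the abort-submission counts climbing across the three runs.
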
00:32:06.937 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30212, failed: 0 00:32:06.937 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2640, failed to submit 27572 00:32:06.937 success 499, unsuccess 2141, failed 0 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@562 -- # xtrace_disable 00:32:06.937 20:01:24 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1356711 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' -z 1356711 ']' 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # kill -0 1356711 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # uname 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1356711 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1356711' 00:32:08.308 killing process with pid 1356711 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # kill 1356711 00:32:08.308 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@975 -- # wait 1356711 00:32:08.566 00:32:08.566 real 0m14.105s 00:32:08.566 user 0m55.441s 00:32:08.566 sys 0m2.681s 00:32:08.566 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # xtrace_disable 00:32:08.566 20:01:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.566 ************************************ 00:32:08.566 END TEST spdk_target_abort 00:32:08.566 ************************************ 00:32:08.566 20:01:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:08.567 20:01:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:32:08.567 20:01:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1108 -- # xtrace_disable 00:32:08.567 20:01:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:08.567 ************************************ 00:32:08.567 START TEST kernel_target_abort 00:32:08.567 
************************************ 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # kernel_target 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # local ip 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@746 -- # ip_candidates=() 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@746 -- # local -A ip_candidates 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@749 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@751 -- # [[ -z tcp ]] 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@751 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@752 -- # ip=NVMF_INITIATOR_IP 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@754 -- # [[ -z 10.0.0.1 ]] 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@759 -- # echo 10.0.0.1 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@638 -- # nvmet=/sys/kernel/config/nvmet 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@640 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@643 -- # local block nvme 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@646 -- # modprobe nvmet 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@649 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:08.567 20:01:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:09.939 Waiting for block devices as requested 00:32:09.939 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:09.939 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:09.939 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:09.939 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:10.196 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:10.196 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:10.196 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:10.196 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:10.455 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:10.455 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:10.455 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:10.455 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:10.712 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:10.712 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:10.712 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:10.712 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:10.970 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@654 -- # for block in /sys/block/nvme* 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@655 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # is_block_zoned nvme0n1 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # local device=nvme0n1 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1666 -- # [[ none != none ]] 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # block_in_use nvme0n1 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:10.970 No valid GPT data, bailing 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@657 -- # nvme=/dev/nvme0n1 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # [[ -b /dev/nvme0n1 ]] 00:32:10.970 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:10.971 20:01:28 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 1 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo /dev/nvme0n1 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 1 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # echo 10.0.0.1 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # echo tcp 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # echo 4420 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # echo ipv4 00:32:10.971 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:11.229 00:32:11.229 Discovery Log Number of Records 2, Generation counter 2 00:32:11.229 =====Discovery Log Entry 0====== 00:32:11.229 trtype: tcp 00:32:11.229 adrfam: ipv4 00:32:11.229 subtype: current discovery subsystem 00:32:11.229 treq: not specified, sq flow control disable supported 00:32:11.229 portid: 1 00:32:11.229 trsvcid: 4420 00:32:11.229 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:11.229 traddr: 10.0.0.1 00:32:11.229 eflags: none 00:32:11.229 sectype: none 00:32:11.229 =====Discovery Log Entry 1====== 00:32:11.229 trtype: tcp 00:32:11.229 adrfam: ipv4 00:32:11.229 subtype: nvme subsystem 00:32:11.229 treq: not specified, sq flow control disable supported 00:32:11.229 portid: 1 00:32:11.229 trsvcid: 4420 00:32:11.229 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:11.229 traddr: 10.0.0.1 00:32:11.229 eflags: none 00:32:11.229 sectype: none 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.229 20:01:28 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.229 20:01:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.229 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.546 Initializing NVMe Controllers 00:32:14.546 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:14.546 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:14.546 Initialization complete. Launching workers. 00:32:14.546 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37639, failed: 0 00:32:14.546 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37639, failed to submit 0 00:32:14.546 success 0, unsuccess 37639, failed 0 00:32:14.546 20:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:14.546 20:01:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.546 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.823 Initializing NVMe Controllers 00:32:17.823 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.823 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:17.823 Initialization complete. Launching workers. 
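For readers following the xtrace: the rabort helper above assembles the -r target string one field at a time, then drives SPDK's abort example once per queue depth. A condensed sketch of that loop, with the values taken from this run (not the verbatim abort_qd_sizes.sh):

    # Sketch of the traced rabort loop; NQN, address and flags as in this log.
    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # -q queue depth, -w rw -M 50 = 50/50 read/write mix, -o 4096 = 4 KiB I/Os
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The qd=4 pass above managed to submit an abort for every I/O (37639 submitted, 0 failed to submit); compare the deeper-queue passes below, where most aborts cannot be submitted at all.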
00:32:17.823 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72545, failed: 0 00:32:17.823 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18246, failed to submit 54299 00:32:17.823 success 0, unsuccess 18246, failed 0 00:32:17.823 20:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.823 20:01:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.823 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.362 Initializing NVMe Controllers 00:32:20.362 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:20.362 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:20.362 Initialization complete. Launching workers. 00:32:20.362 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70378, failed: 0 00:32:20.362 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17586, failed to submit 52792 00:32:20.362 success 0, unsuccess 17586, failed 0 00:32:20.362 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:20.362 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:20.362 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # echo 0 00:32:20.362 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:20.621 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:20.621 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@694 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:20.621 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:20.621 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # modules=(/sys/module/nvmet/holders/*) 00:32:20.621 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # modprobe -r nvmet_tcp nvmet 00:32:20.621 20:01:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:21.554 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:21.554 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:21.554 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:21.812 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:21.812 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:21.812 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:21.812 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:21.812 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:32:21.812 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:21.812 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:22.747 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:22.747 00:32:22.747 real 0m14.205s 00:32:22.747 user 0m5.694s 00:32:22.747 sys 0m3.331s 00:32:22.747 20:01:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # xtrace_disable 00:32:22.747 20:01:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.747 ************************************ 00:32:22.747 END TEST kernel_target_abort 00:32:22.747 ************************************ 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # nvmfcleanup 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:22.747 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:22.747 rmmod nvme_tcp 00:32:22.747 rmmod nvme_fabrics 00:32:22.747 rmmod nvme_keyring 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # '[' -n 1356711 ']' 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@494 -- # killprocess 1356711 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # '[' -z 1356711 ']' 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@955 -- # kill -0 1356711 00:32:23.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 955: kill: (1356711) - No such process 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- common/autotest_common.sh@978 -- # echo 'Process with pid 1356711 is not found' 00:32:23.006 Process with pid 1356711 is not found 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' iso == iso ']' 00:32:23.006 20:01:40 nvmf_abort_qd_sizes -- nvmf/common.sh@497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:23.938 Waiting for block devices as requested 00:32:23.938 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:23.938 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:24.197 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:24.197 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:24.197 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:24.456 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:24.456 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.456 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:24.456 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:24.712 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:24.712 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:24.712 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:24.712 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:24.969 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:24.969 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:24.969 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:32:25.228 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- nvmf/common.sh@499 -- # [[ tcp == \t\c\p ]] 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- nvmf/common.sh@500 -- # nvmf_tcp_fini 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- nvmf/common.sh@282 -- # remove_spdk_ns 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- nvmf/common.sh@632 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:25.228 20:01:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.758 20:01:44 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip -4 addr flush cvl_0_1 00:32:27.758 00:32:27.758 real 0m38.239s 00:32:27.758 user 1m3.331s 00:32:27.758 sys 0m9.353s 00:32:27.758 20:01:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # xtrace_disable 00:32:27.758 20:01:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:27.758 ************************************ 00:32:27.758 END TEST nvmf_abort_qd_sizes 00:32:27.758 ************************************ 00:32:27.758 20:01:44 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:27.758 20:01:44 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:32:27.758 20:01:44 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:32:27.758 20:01:44 -- common/autotest_common.sh@10 -- # set +x 00:32:27.758 ************************************ 00:32:27.758 START TEST keyring_file 00:32:27.758 ************************************ 00:32:27.758 20:01:44 keyring_file -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:27.758 * Looking for test storage... 
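Before the keyring_file run gets going, it is worth condensing what the kernel-target test above actually did: the target the abort workload attacked was plain nvmet configfs plumbing. A minimal sketch of the setup/teardown pair as traced; the xtrace output elides the echo redirect targets, so the attribute file names below are the standard nvmet configfs ones and should be read as assumptions:

    # Setup, mirroring nvmf/common.sh@662-681 in the trace above.
    nqn=nqn.2016-06.io.spdk:testnqn
    cfs=/sys/kernel/config/nvmet
    mkdir "$cfs/subsystems/$nqn" "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1"
    echo "SPDK-$nqn"  > "$cfs/subsystems/$nqn/attr_model"            # assumed target file
    echo 1            > "$cfs/subsystems/$nqn/attr_allow_any_host"   # assumed target file
    echo /dev/nvme0n1 > "$cfs/subsystems/$nqn/namespaces/1/device_path"
    echo 1            > "$cfs/subsystems/$nqn/namespaces/1/enable"
    echo 10.0.0.1     > "$cfs/ports/1/addr_traddr"
    echo tcp          > "$cfs/ports/1/addr_trtype"
    echo 4420         > "$cfs/ports/1/addr_trsvcid"
    echo ipv4         > "$cfs/ports/1/addr_adrfam"
    ln -s "$cfs/subsystems/$nqn" "$cfs/ports/1/subsystems/"
    # Teardown, mirroring clean_kernel_target (nvmf/common.sh@688-699).
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"
    rm -f "$cfs/ports/1/subsystems/$nqn"
    rmdir "$cfs/subsystems/$nqn/namespaces/1" "$cfs/ports/1" "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet

The ln -s is what actually exposes the subsystem on the port; the nvme discover output earlier confirms both the discovery subsystem and nqn.2016-06.io.spdk:testnqn became visible on 10.0.0.1:4420 right after it.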
00:32:27.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.758 20:01:44 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.758 20:01:44 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.758 20:01:44 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.758 20:01:44 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.758 20:01:44 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.758 20:01:44 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.758 20:01:44 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:27.758 20:01:44 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@51 -- # : 0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:27.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qHQas2nzag 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@706 -- # local prefix key digest 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@708 -- # 
key=00112233445566778899aabbccddeeff 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@708 -- # digest=0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@709 -- # python - 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qHQas2nzag 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qHQas2nzag 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.qHQas2nzag 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CFe31mhI0q 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@706 -- # local prefix key digest 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@708 -- # key=112233445566778899aabbccddeeff00 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@708 -- # digest=0 00:32:27.758 20:01:44 keyring_file -- nvmf/common.sh@709 -- # python - 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CFe31mhI0q 00:32:27.758 20:01:44 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CFe31mhI0q 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CFe31mhI0q 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@30 -- # tgtpid=1362472 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:27.758 20:01:44 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1362472 00:32:27.758 20:01:44 keyring_file -- common/autotest_common.sh@832 -- # '[' -z 1362472 ']' 00:32:27.759 20:01:44 keyring_file -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:27.759 20:01:44 keyring_file -- common/autotest_common.sh@837 -- # local max_retries=100 00:32:27.759 20:01:44 keyring_file -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:27.759 20:01:44 keyring_file -- common/autotest_common.sh@841 -- # xtrace_disable 00:32:27.759 20:01:44 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:27.759 [2024-07-24 20:01:44.768033] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
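The two prep_key calls above reduce to a small pattern: make a temp file, write the hex key in NVMe TLS interchange form (the NVMeTLSkey-1 prefix set just before the inline python step), and lock the permissions down. A sketch, with the redirect into the temp file assumed, since the xtrace does not show it:

    # Sketch of prep_key (keyring/common.sh@15-23) as traced above.
    key=00112233445566778899aabbccddeeff
    path=$(mktemp)                              # this run: /tmp/tmp.qHQas2nzag
    format_interchange_psk "$key" 0 > "$path"   # digest 0; output assumed redirected here
    chmod 0600 "$path"                          # required: the key-file test below fails on 0660

The 0600 step matters: later in this run, keyring_file_add_key rejects the same file after a chmod 0660 with "Invalid permissions for key file".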
00:32:27.759 [2024-07-24 20:01:44.768111] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362472 ] 00:32:27.759 EAL: No free 2048 kB hugepages reported on node 1 00:32:27.759 [2024-07-24 20:01:44.829648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:27.759 [2024-07-24 20:01:44.937066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@865 -- # return 0 00:32:28.016 20:01:45 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@562 -- # xtrace_disable 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:28.016 [2024-07-24 20:01:45.203311] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:28.016 null0 00:32:28.016 [2024-07-24 20:01:45.235363] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:28.016 [2024-07-24 20:01:45.235879] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:28.016 [2024-07-24 20:01:45.243359] tcp.c:3812:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:32:28.016 20:01:45 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@651 -- # local es=0 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@653 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@639 -- # local arg=rpc_cmd 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@643 -- # type -t rpc_cmd 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@654 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@562 -- # xtrace_disable 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:28.016 [2024-07-24 20:01:45.255376] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:28.016 request: 00:32:28.016 { 00:32:28.016 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:28.016 "secure_channel": false, 00:32:28.016 "listen_address": { 00:32:28.016 "trtype": "tcp", 00:32:28.016 "traddr": "127.0.0.1", 00:32:28.016 "trsvcid": "4420" 00:32:28.016 }, 00:32:28.016 "method": "nvmf_subsystem_add_listener", 00:32:28.016 "req_id": 1 00:32:28.016 } 00:32:28.016 Got JSON-RPC error response 00:32:28.016 response: 00:32:28.016 { 00:32:28.016 "code": -32602, 00:32:28.016 "message": "Invalid parameters" 00:32:28.016 } 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@590 -- # [[ 1 == 0 ]] 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@654 -- # es=1 
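The "Invalid parameters" exchange above is intentional: the target is already listening on 127.0.0.1:4420, so the duplicate nvmf_subsystem_add_listener must fail, and the test wraps it in NOT. A minimal sketch of that idiom; the real helper in autotest_common.sh is more elaborate (the es checks around this point in the trace also special-case exit codes above 128):

    # Sketch of the NOT negative-test idiom: succeed only if the command fails.
    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture its exit status
        (( es != 0 ))    # invert it: an expected failure makes the test pass
    }
    NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0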
00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:32:28.016 20:01:45 keyring_file -- keyring/file.sh@46 -- # bperfpid=1362488 00:32:28.016 20:01:45 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:28.016 20:01:45 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1362488 /var/tmp/bperf.sock 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@832 -- # '[' -z 1362488 ']' 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@837 -- # local max_retries=100 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:28.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@841 -- # xtrace_disable 00:32:28.016 20:01:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:28.017 [2024-07-24 20:01:45.305777] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 00:32:28.017 [2024-07-24 20:01:45.305853] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1362488 ] 00:32:28.017 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.017 [2024-07-24 20:01:45.370572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:28.275 [2024-07-24 20:01:45.487496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.208 20:01:46 keyring_file -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:32:29.208 20:01:46 keyring_file -- common/autotest_common.sh@865 -- # return 0 00:32:29.208 20:01:46 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:29.208 20:01:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:29.208 20:01:46 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CFe31mhI0q 00:32:29.208 20:01:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CFe31mhI0q 00:32:29.466 20:01:46 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:29.466 20:01:46 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:29.466 20:01:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.466 20:01:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:29.466 20:01:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.723 20:01:46 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.qHQas2nzag == \/\t\m\p\/\t\m\p\.\q\H\Q\a\s\2\n\z\a\g ]] 00:32:29.723 20:01:46 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:29.723 20:01:46 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:29.723 20:01:46 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.723 20:01:46 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.723 20:01:46 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:29.981 20:01:47 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.CFe31mhI0q == \/\t\m\p\/\t\m\p\.\C\F\e\3\1\m\h\I\0\q ]] 00:32:29.981 20:01:47 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:29.981 20:01:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:29.981 20:01:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:29.981 20:01:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:29.981 20:01:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:29.981 20:01:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:30.239 20:01:47 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:30.239 20:01:47 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:30.239 20:01:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:30.239 20:01:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:30.239 20:01:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.239 20:01:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:30.239 20:01:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.496 20:01:47 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:30.496 20:01:47 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:30.496 20:01:47 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:30.754 [2024-07-24 20:01:47.970891] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:30.754 nvme0n1 00:32:30.754 20:01:48 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:30.754 20:01:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:30.754 20:01:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:30.754 20:01:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:30.754 20:01:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:30.754 20:01:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.018 20:01:48 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:31.018 20:01:48 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:31.018 20:01:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.018 20:01:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.018 20:01:48 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.018 20:01:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.018 20:01:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.332 20:01:48 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:31.332 20:01:48 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:31.332 Running I/O for 1 seconds... 00:32:32.701 00:32:32.701 Latency(us) 00:32:32.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.701 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:32.701 nvme0n1 : 1.01 6957.94 27.18 0.00 0.00 18307.01 4053.52 25243.50 00:32:32.701 =================================================================================================================== 00:32:32.701 Total : 6957.94 27.18 0.00 0.00 18307.01 4053.52 25243.50 00:32:32.701 0 00:32:32.701 20:01:49 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:32.701 20:01:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:32.701 20:01:49 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:32.701 20:01:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:32.701 20:01:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.701 20:01:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.701 20:01:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.701 20:01:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:32.960 20:01:50 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:32.960 20:01:50 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:32.960 20:01:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:32.960 20:01:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.960 20:01:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.960 20:01:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:32.960 20:01:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.218 20:01:50 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:33.218 20:01:50 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.218 20:01:50 keyring_file -- common/autotest_common.sh@651 -- # local es=0 00:32:33.218 20:01:50 keyring_file -- common/autotest_common.sh@653 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.218 20:01:50 keyring_file -- common/autotest_common.sh@639 -- # local arg=bperf_cmd 00:32:33.218 20:01:50 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:33.218 20:01:50 keyring_file -- 
common/autotest_common.sh@643 -- # type -t bperf_cmd 00:32:33.218 20:01:50 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:33.218 20:01:50 keyring_file -- common/autotest_common.sh@654 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.218 20:01:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.475 [2024-07-24 20:01:50.721343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:33.475 [2024-07-24 20:01:50.721509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6589a0 (107): Transport endpoint is not connected 00:32:33.475 [2024-07-24 20:01:50.722502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6589a0 (9): Bad file descriptor 00:32:33.475 [2024-07-24 20:01:50.723500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:33.475 [2024-07-24 20:01:50.723519] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:33.475 [2024-07-24 20:01:50.723556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:33.475 request: 00:32:33.475 { 00:32:33.475 "name": "nvme0", 00:32:33.475 "trtype": "tcp", 00:32:33.475 "traddr": "127.0.0.1", 00:32:33.475 "adrfam": "ipv4", 00:32:33.475 "trsvcid": "4420", 00:32:33.476 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:33.476 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:33.476 "prchk_reftag": false, 00:32:33.476 "prchk_guard": false, 00:32:33.476 "hdgst": false, 00:32:33.476 "ddgst": false, 00:32:33.476 "psk": "key1", 00:32:33.476 "method": "bdev_nvme_attach_controller", 00:32:33.476 "req_id": 1 00:32:33.476 } 00:32:33.476 Got JSON-RPC error response 00:32:33.476 response: 00:32:33.476 { 00:32:33.476 "code": -5, 00:32:33.476 "message": "Input/output error" 00:32:33.476 } 00:32:33.476 20:01:50 keyring_file -- common/autotest_common.sh@654 -- # es=1 00:32:33.476 20:01:50 keyring_file -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:32:33.476 20:01:50 keyring_file -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:32:33.476 20:01:50 keyring_file -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:32:33.476 20:01:50 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:33.476 20:01:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:33.476 20:01:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.476 20:01:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.476 20:01:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.476 20:01:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.733 20:01:50 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:33.733 20:01:50 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:33.733 20:01:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:33.733 20:01:50 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.733 20:01:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.733 20:01:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.733 20:01:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:33.990 20:01:51 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:33.990 20:01:51 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:33.990 20:01:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:34.246 20:01:51 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:34.246 20:01:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:34.504 20:01:51 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:34.504 20:01:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.504 20:01:51 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:34.759 20:01:51 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:34.759 20:01:51 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.qHQas2nzag 00:32:34.759 20:01:51 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@651 -- # local es=0 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@653 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@639 -- # local arg=bperf_cmd 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@643 -- # type -t bperf_cmd 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:34.759 20:01:51 keyring_file -- common/autotest_common.sh@654 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:34.760 20:01:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:35.016 [2024-07-24 20:01:52.210435] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qHQas2nzag': 0100660 00:32:35.016 [2024-07-24 20:01:52.210468] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:35.016 request: 00:32:35.016 { 00:32:35.016 "name": "key0", 00:32:35.016 "path": "/tmp/tmp.qHQas2nzag", 00:32:35.016 "method": "keyring_file_add_key", 00:32:35.016 "req_id": 1 00:32:35.016 } 00:32:35.016 Got JSON-RPC error response 00:32:35.016 response: 00:32:35.016 { 00:32:35.016 "code": -1, 00:32:35.016 "message": "Operation not permitted" 00:32:35.016 } 00:32:35.016 20:01:52 keyring_file -- common/autotest_common.sh@654 -- # es=1 00:32:35.016 20:01:52 keyring_file -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:32:35.016 20:01:52 keyring_file -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:32:35.016 20:01:52 keyring_file -- 
common/autotest_common.sh@678 -- # (( !es == 0 )) 00:32:35.016 20:01:52 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.qHQas2nzag 00:32:35.016 20:01:52 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:35.016 20:01:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.qHQas2nzag 00:32:35.280 20:01:52 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.qHQas2nzag 00:32:35.280 20:01:52 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:35.280 20:01:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:35.280 20:01:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.280 20:01:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.280 20:01:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.280 20:01:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.537 20:01:52 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:35.537 20:01:52 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@651 -- # local es=0 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@653 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@639 -- # local arg=bperf_cmd 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@643 -- # type -t bperf_cmd 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:35.537 20:01:52 keyring_file -- common/autotest_common.sh@654 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.537 20:01:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.795 [2024-07-24 20:01:52.972514] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.qHQas2nzag': No such file or directory 00:32:35.795 [2024-07-24 20:01:52.972576] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:35.795 [2024-07-24 20:01:52.972619] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:35.795 [2024-07-24 20:01:52.972633] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:35.795 [2024-07-24 20:01:52.972645] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:35.795 request: 00:32:35.795 { 00:32:35.795 "name": "nvme0", 00:32:35.795 "trtype": "tcp", 00:32:35.795 "traddr": "127.0.0.1", 00:32:35.795 "adrfam": "ipv4", 00:32:35.795 
"trsvcid": "4420", 00:32:35.795 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.795 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.795 "prchk_reftag": false, 00:32:35.795 "prchk_guard": false, 00:32:35.795 "hdgst": false, 00:32:35.795 "ddgst": false, 00:32:35.795 "psk": "key0", 00:32:35.795 "method": "bdev_nvme_attach_controller", 00:32:35.795 "req_id": 1 00:32:35.795 } 00:32:35.795 Got JSON-RPC error response 00:32:35.795 response: 00:32:35.795 { 00:32:35.795 "code": -19, 00:32:35.795 "message": "No such device" 00:32:35.795 } 00:32:35.795 20:01:52 keyring_file -- common/autotest_common.sh@654 -- # es=1 00:32:35.795 20:01:52 keyring_file -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:32:35.795 20:01:52 keyring_file -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:32:35.795 20:01:52 keyring_file -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:32:35.795 20:01:52 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:35.795 20:01:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:36.052 20:01:53 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.NtX6qxxlqG 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:36.052 20:01:53 keyring_file -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:36.052 20:01:53 keyring_file -- nvmf/common.sh@706 -- # local prefix key digest 00:32:36.052 20:01:53 keyring_file -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:32:36.052 20:01:53 keyring_file -- nvmf/common.sh@708 -- # key=00112233445566778899aabbccddeeff 00:32:36.052 20:01:53 keyring_file -- nvmf/common.sh@708 -- # digest=0 00:32:36.052 20:01:53 keyring_file -- nvmf/common.sh@709 -- # python - 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.NtX6qxxlqG 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.NtX6qxxlqG 00:32:36.052 20:01:53 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.NtX6qxxlqG 00:32:36.052 20:01:53 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtX6qxxlqG 00:32:36.052 20:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NtX6qxxlqG 00:32:36.308 20:01:53 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.308 20:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.565 nvme0n1 00:32:36.565 
20:01:53 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:36.565 20:01:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.565 20:01:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.565 20:01:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.565 20:01:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.565 20:01:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.822 20:01:54 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:36.822 20:01:54 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:36.823 20:01:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:37.082 20:01:54 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:37.082 20:01:54 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:37.082 20:01:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.082 20:01:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.082 20:01:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:37.342 20:01:54 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:37.342 20:01:54 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:37.342 20:01:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:37.342 20:01:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:37.342 20:01:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:37.342 20:01:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.342 20:01:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:37.599 20:01:54 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:37.599 20:01:54 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:37.599 20:01:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:37.856 20:01:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:37.856 20:01:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.856 20:01:55 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:38.112 20:01:55 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:38.112 20:01:55 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.NtX6qxxlqG 00:32:38.112 20:01:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.NtX6qxxlqG 00:32:38.368 20:01:55 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CFe31mhI0q 00:32:38.368 20:01:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CFe31mhI0q 00:32:38.626 20:01:55 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:38.626 20:01:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:38.883 nvme0n1 00:32:38.883 20:01:56 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:38.883 20:01:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:39.140 20:01:56 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:39.140 "subsystems": [ 00:32:39.140 { 00:32:39.140 "subsystem": "keyring", 00:32:39.140 "config": [ 00:32:39.140 { 00:32:39.140 "method": "keyring_file_add_key", 00:32:39.140 "params": { 00:32:39.140 "name": "key0", 00:32:39.140 "path": "/tmp/tmp.NtX6qxxlqG" 00:32:39.140 } 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "method": "keyring_file_add_key", 00:32:39.140 "params": { 00:32:39.140 "name": "key1", 00:32:39.140 "path": "/tmp/tmp.CFe31mhI0q" 00:32:39.140 } 00:32:39.140 } 00:32:39.140 ] 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "subsystem": "iobuf", 00:32:39.140 "config": [ 00:32:39.140 { 00:32:39.140 "method": "iobuf_set_options", 00:32:39.140 "params": { 00:32:39.140 "small_pool_count": 8192, 00:32:39.140 "large_pool_count": 1024, 00:32:39.140 "small_bufsize": 8192, 00:32:39.140 "large_bufsize": 135168 00:32:39.140 } 00:32:39.140 } 00:32:39.140 ] 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "subsystem": "sock", 00:32:39.140 "config": [ 00:32:39.140 { 00:32:39.140 "method": "sock_set_default_impl", 00:32:39.140 "params": { 00:32:39.140 "impl_name": "posix" 00:32:39.140 } 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "method": "sock_impl_set_options", 00:32:39.140 "params": { 00:32:39.140 "impl_name": "ssl", 00:32:39.140 "recv_buf_size": 4096, 00:32:39.140 "send_buf_size": 4096, 00:32:39.140 "enable_recv_pipe": true, 00:32:39.140 "enable_quickack": false, 00:32:39.140 "enable_placement_id": 0, 00:32:39.140 "enable_zerocopy_send_server": true, 00:32:39.140 "enable_zerocopy_send_client": false, 00:32:39.140 "zerocopy_threshold": 0, 00:32:39.140 "tls_version": 0, 00:32:39.140 "enable_ktls": false 00:32:39.140 } 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "method": "sock_impl_set_options", 00:32:39.140 "params": { 00:32:39.140 "impl_name": "posix", 00:32:39.140 "recv_buf_size": 2097152, 00:32:39.140 "send_buf_size": 2097152, 00:32:39.140 "enable_recv_pipe": true, 00:32:39.140 "enable_quickack": false, 00:32:39.140 "enable_placement_id": 0, 00:32:39.140 "enable_zerocopy_send_server": true, 00:32:39.140 "enable_zerocopy_send_client": false, 00:32:39.140 "zerocopy_threshold": 0, 00:32:39.140 "tls_version": 0, 00:32:39.140 "enable_ktls": false 00:32:39.140 } 00:32:39.140 } 00:32:39.140 ] 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "subsystem": "vmd", 00:32:39.140 "config": [] 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "subsystem": "accel", 00:32:39.140 "config": [ 00:32:39.140 { 00:32:39.140 "method": "accel_set_options", 00:32:39.140 "params": { 00:32:39.140 "small_cache_size": 128, 00:32:39.140 "large_cache_size": 16, 00:32:39.140 "task_count": 2048, 00:32:39.140 "sequence_count": 2048, 00:32:39.140 "buf_count": 2048 00:32:39.140 } 00:32:39.140 } 00:32:39.140 ] 00:32:39.140 
}, 00:32:39.140 { 00:32:39.140 "subsystem": "bdev", 00:32:39.140 "config": [ 00:32:39.140 { 00:32:39.140 "method": "bdev_set_options", 00:32:39.140 "params": { 00:32:39.140 "bdev_io_pool_size": 65535, 00:32:39.140 "bdev_io_cache_size": 256, 00:32:39.140 "bdev_auto_examine": true, 00:32:39.140 "iobuf_small_cache_size": 128, 00:32:39.140 "iobuf_large_cache_size": 16 00:32:39.140 } 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "method": "bdev_raid_set_options", 00:32:39.140 "params": { 00:32:39.140 "process_window_size_kb": 1024, 00:32:39.140 "process_max_bandwidth_mb_sec": 0 00:32:39.140 } 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "method": "bdev_iscsi_set_options", 00:32:39.140 "params": { 00:32:39.140 "timeout_sec": 30 00:32:39.140 } 00:32:39.140 }, 00:32:39.140 { 00:32:39.140 "method": "bdev_nvme_set_options", 00:32:39.140 "params": { 00:32:39.140 "action_on_timeout": "none", 00:32:39.140 "timeout_us": 0, 00:32:39.140 "timeout_admin_us": 0, 00:32:39.140 "keep_alive_timeout_ms": 10000, 00:32:39.140 "arbitration_burst": 0, 00:32:39.140 "low_priority_weight": 0, 00:32:39.140 "medium_priority_weight": 0, 00:32:39.140 "high_priority_weight": 0, 00:32:39.140 "nvme_adminq_poll_period_us": 10000, 00:32:39.140 "nvme_ioq_poll_period_us": 0, 00:32:39.140 "io_queue_requests": 512, 00:32:39.140 "delay_cmd_submit": true, 00:32:39.140 "transport_retry_count": 4, 00:32:39.140 "bdev_retry_count": 3, 00:32:39.140 "transport_ack_timeout": 0, 00:32:39.140 "ctrlr_loss_timeout_sec": 0, 00:32:39.140 "reconnect_delay_sec": 0, 00:32:39.140 "fast_io_fail_timeout_sec": 0, 00:32:39.140 "disable_auto_failback": false, 00:32:39.140 "generate_uuids": false, 00:32:39.140 "transport_tos": 0, 00:32:39.140 "nvme_error_stat": false, 00:32:39.141 "rdma_srq_size": 0, 00:32:39.141 "io_path_stat": false, 00:32:39.141 "allow_accel_sequence": false, 00:32:39.141 "rdma_max_cq_size": 0, 00:32:39.141 "rdma_cm_event_timeout_ms": 0, 00:32:39.141 "dhchap_digests": [ 00:32:39.141 "sha256", 00:32:39.141 "sha384", 00:32:39.141 "sha512" 00:32:39.141 ], 00:32:39.141 "dhchap_dhgroups": [ 00:32:39.141 "null", 00:32:39.141 "ffdhe2048", 00:32:39.141 "ffdhe3072", 00:32:39.141 "ffdhe4096", 00:32:39.141 "ffdhe6144", 00:32:39.141 "ffdhe8192" 00:32:39.141 ] 00:32:39.141 } 00:32:39.141 }, 00:32:39.141 { 00:32:39.141 "method": "bdev_nvme_attach_controller", 00:32:39.141 "params": { 00:32:39.141 "name": "nvme0", 00:32:39.141 "trtype": "TCP", 00:32:39.141 "adrfam": "IPv4", 00:32:39.141 "traddr": "127.0.0.1", 00:32:39.141 "trsvcid": "4420", 00:32:39.141 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.141 "prchk_reftag": false, 00:32:39.141 "prchk_guard": false, 00:32:39.141 "ctrlr_loss_timeout_sec": 0, 00:32:39.141 "reconnect_delay_sec": 0, 00:32:39.141 "fast_io_fail_timeout_sec": 0, 00:32:39.141 "psk": "key0", 00:32:39.141 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:39.141 "hdgst": false, 00:32:39.141 "ddgst": false 00:32:39.141 } 00:32:39.141 }, 00:32:39.141 { 00:32:39.141 "method": "bdev_nvme_set_hotplug", 00:32:39.141 "params": { 00:32:39.141 "period_us": 100000, 00:32:39.141 "enable": false 00:32:39.141 } 00:32:39.141 }, 00:32:39.141 { 00:32:39.141 "method": "bdev_wait_for_examine" 00:32:39.141 } 00:32:39.141 ] 00:32:39.141 }, 00:32:39.141 { 00:32:39.141 "subsystem": "nbd", 00:32:39.141 "config": [] 00:32:39.141 } 00:32:39.141 ] 00:32:39.141 }' 00:32:39.141 20:01:56 keyring_file -- keyring/file.sh@114 -- # killprocess 1362488 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@951 -- # '[' -z 1362488 ']' 00:32:39.141 20:01:56 
keyring_file -- common/autotest_common.sh@955 -- # kill -0 1362488 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@956 -- # uname 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1362488 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1362488' 00:32:39.141 killing process with pid 1362488 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@970 -- # kill 1362488 00:32:39.141 Received shutdown signal, test time was about 1.000000 seconds 00:32:39.141 00:32:39.141 Latency(us) 00:32:39.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.141 =================================================================================================================== 00:32:39.141 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:39.141 20:01:56 keyring_file -- common/autotest_common.sh@975 -- # wait 1362488 00:32:39.399 20:01:56 keyring_file -- keyring/file.sh@117 -- # bperfpid=1363956 00:32:39.399 20:01:56 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1363956 /var/tmp/bperf.sock 00:32:39.399 20:01:56 keyring_file -- common/autotest_common.sh@832 -- # '[' -z 1363956 ']' 00:32:39.399 20:01:56 keyring_file -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:39.399 20:01:56 keyring_file -- common/autotest_common.sh@837 -- # local max_retries=100 00:32:39.399 20:01:56 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:39.399 20:01:56 keyring_file -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:39.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
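Worth unpacking before the config dump that follows: file.sh@115 starts a second bdevperf whose bdev layer is configured out of band. The JSON echoed next reaches the process as /dev/fd/63, which is the signature of bash process substitution, and every later bperf_cmd is plain rpc.py aimed at the private /var/tmp/bperf.sock. A minimal sketch of that launch pattern, assuming process substitution feeds fd 63; gen_bperf_config is a hypothetical stand-in for the echo of the saved config, and $rootdir stands for the spdk checkout:

# Start bdevperf with an out-of-band JSON config and a private RPC socket.
# -z makes it wait for an RPC start signal; -c <(...) appears as /dev/fd/63.
gen_bperf_config() {    # hypothetical helper: emits the JSON dumped below
    echo "$config"
}
"$rootdir"/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(gen_bperf_config) &
bperfpid=$!

# Every bperf_cmd in this log resolves to rpc.py against that socket:
"$rootdir"/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
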
00:32:39.399 20:01:56 keyring_file -- common/autotest_common.sh@841 -- # xtrace_disable 00:32:39.399 20:01:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:39.399 20:01:56 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:39.399 "subsystems": [ 00:32:39.399 { 00:32:39.399 "subsystem": "keyring", 00:32:39.399 "config": [ 00:32:39.399 { 00:32:39.399 "method": "keyring_file_add_key", 00:32:39.399 "params": { 00:32:39.399 "name": "key0", 00:32:39.399 "path": "/tmp/tmp.NtX6qxxlqG" 00:32:39.399 } 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "method": "keyring_file_add_key", 00:32:39.399 "params": { 00:32:39.399 "name": "key1", 00:32:39.399 "path": "/tmp/tmp.CFe31mhI0q" 00:32:39.399 } 00:32:39.399 } 00:32:39.399 ] 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "subsystem": "iobuf", 00:32:39.399 "config": [ 00:32:39.399 { 00:32:39.399 "method": "iobuf_set_options", 00:32:39.399 "params": { 00:32:39.399 "small_pool_count": 8192, 00:32:39.399 "large_pool_count": 1024, 00:32:39.399 "small_bufsize": 8192, 00:32:39.399 "large_bufsize": 135168 00:32:39.399 } 00:32:39.399 } 00:32:39.399 ] 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "subsystem": "sock", 00:32:39.399 "config": [ 00:32:39.399 { 00:32:39.399 "method": "sock_set_default_impl", 00:32:39.399 "params": { 00:32:39.399 "impl_name": "posix" 00:32:39.399 } 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "method": "sock_impl_set_options", 00:32:39.399 "params": { 00:32:39.399 "impl_name": "ssl", 00:32:39.399 "recv_buf_size": 4096, 00:32:39.399 "send_buf_size": 4096, 00:32:39.399 "enable_recv_pipe": true, 00:32:39.399 "enable_quickack": false, 00:32:39.399 "enable_placement_id": 0, 00:32:39.399 "enable_zerocopy_send_server": true, 00:32:39.399 "enable_zerocopy_send_client": false, 00:32:39.399 "zerocopy_threshold": 0, 00:32:39.399 "tls_version": 0, 00:32:39.399 "enable_ktls": false 00:32:39.399 } 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "method": "sock_impl_set_options", 00:32:39.399 "params": { 00:32:39.399 "impl_name": "posix", 00:32:39.399 "recv_buf_size": 2097152, 00:32:39.399 "send_buf_size": 2097152, 00:32:39.399 "enable_recv_pipe": true, 00:32:39.399 "enable_quickack": false, 00:32:39.399 "enable_placement_id": 0, 00:32:39.399 "enable_zerocopy_send_server": true, 00:32:39.399 "enable_zerocopy_send_client": false, 00:32:39.399 "zerocopy_threshold": 0, 00:32:39.399 "tls_version": 0, 00:32:39.399 "enable_ktls": false 00:32:39.399 } 00:32:39.399 } 00:32:39.399 ] 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "subsystem": "vmd", 00:32:39.399 "config": [] 00:32:39.399 }, 00:32:39.399 { 00:32:39.399 "subsystem": "accel", 00:32:39.399 "config": [ 00:32:39.399 { 00:32:39.399 "method": "accel_set_options", 00:32:39.399 "params": { 00:32:39.399 "small_cache_size": 128, 00:32:39.400 "large_cache_size": 16, 00:32:39.400 "task_count": 2048, 00:32:39.400 "sequence_count": 2048, 00:32:39.400 "buf_count": 2048 00:32:39.400 } 00:32:39.400 } 00:32:39.400 ] 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "subsystem": "bdev", 00:32:39.400 "config": [ 00:32:39.400 { 00:32:39.400 "method": "bdev_set_options", 00:32:39.400 "params": { 00:32:39.400 "bdev_io_pool_size": 65535, 00:32:39.400 "bdev_io_cache_size": 256, 00:32:39.400 "bdev_auto_examine": true, 00:32:39.400 "iobuf_small_cache_size": 128, 00:32:39.400 "iobuf_large_cache_size": 16 00:32:39.400 } 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "method": "bdev_raid_set_options", 00:32:39.400 "params": { 00:32:39.400 "process_window_size_kb": 1024, 00:32:39.400 "process_max_bandwidth_mb_sec": 0 00:32:39.400 
} 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "method": "bdev_iscsi_set_options", 00:32:39.400 "params": { 00:32:39.400 "timeout_sec": 30 00:32:39.400 } 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "method": "bdev_nvme_set_options", 00:32:39.400 "params": { 00:32:39.400 "action_on_timeout": "none", 00:32:39.400 "timeout_us": 0, 00:32:39.400 "timeout_admin_us": 0, 00:32:39.400 "keep_alive_timeout_ms": 10000, 00:32:39.400 "arbitration_burst": 0, 00:32:39.400 "low_priority_weight": 0, 00:32:39.400 "medium_priority_weight": 0, 00:32:39.400 "high_priority_weight": 0, 00:32:39.400 "nvme_adminq_poll_period_us": 10000, 00:32:39.400 "nvme_ioq_poll_period_us": 0, 00:32:39.400 "io_queue_requests": 512, 00:32:39.400 "delay_cmd_submit": true, 00:32:39.400 "transport_retry_count": 4, 00:32:39.400 "bdev_retry_count": 3, 00:32:39.400 "transport_ack_timeout": 0, 00:32:39.400 "ctrlr_loss_timeout_sec": 0, 00:32:39.400 "reconnect_delay_sec": 0, 00:32:39.400 "fast_io_fail_timeout_sec": 0, 00:32:39.400 "disable_auto_failback": false, 00:32:39.400 "generate_uuids": false, 00:32:39.400 "transport_tos": 0, 00:32:39.400 "nvme_error_stat": false, 00:32:39.400 "rdma_srq_size": 0, 00:32:39.400 "io_path_stat": false, 00:32:39.400 "allow_accel_sequence": false, 00:32:39.400 "rdma_max_cq_size": 0, 00:32:39.400 "rdma_cm_event_timeout_ms": 0, 00:32:39.400 "dhchap_digests": [ 00:32:39.400 "sha256", 00:32:39.400 "sha384", 00:32:39.400 "sha512" 00:32:39.400 ], 00:32:39.400 "dhchap_dhgroups": [ 00:32:39.400 "null", 00:32:39.400 "ffdhe2048", 00:32:39.400 "ffdhe3072", 00:32:39.400 "ffdhe4096", 00:32:39.400 "ffdhe6144", 00:32:39.400 "ffdhe8192" 00:32:39.400 ] 00:32:39.400 } 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "method": "bdev_nvme_attach_controller", 00:32:39.400 "params": { 00:32:39.400 "name": "nvme0", 00:32:39.400 "trtype": "TCP", 00:32:39.400 "adrfam": "IPv4", 00:32:39.400 "traddr": "127.0.0.1", 00:32:39.400 "trsvcid": "4420", 00:32:39.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:39.400 "prchk_reftag": false, 00:32:39.400 "prchk_guard": false, 00:32:39.400 "ctrlr_loss_timeout_sec": 0, 00:32:39.400 "reconnect_delay_sec": 0, 00:32:39.400 "fast_io_fail_timeout_sec": 0, 00:32:39.400 "psk": "key0", 00:32:39.400 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:39.400 "hdgst": false, 00:32:39.400 "ddgst": false 00:32:39.400 } 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "method": "bdev_nvme_set_hotplug", 00:32:39.400 "params": { 00:32:39.400 "period_us": 100000, 00:32:39.400 "enable": false 00:32:39.400 } 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "method": "bdev_wait_for_examine" 00:32:39.400 } 00:32:39.400 ] 00:32:39.400 }, 00:32:39.400 { 00:32:39.400 "subsystem": "nbd", 00:32:39.400 "config": [] 00:32:39.400 } 00:32:39.400 ] 00:32:39.400 }' 00:32:39.657 [2024-07-24 20:01:56.786336] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
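The assertions that follow (file.sh@120–123) all reduce to one pattern: list the keys over the bperf socket and filter with jq. A condensed sketch of the get_refcnt helper from keyring/common.sh as exercised here, with the socket path, key names, and expected counts taken from the log ($rootdir again stands for the spdk checkout):

# Query the running bdevperf for one key's reference count.
get_refcnt() {
    local name=$1
    "$rootdir"/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r ".[] | select(.name == \"$name\") | .refcnt"
}

(( $(get_refcnt key0) == 2 ))    # expected 2 per file.sh@121 below
(( $(get_refcnt key1) == 1 ))    # expected 1 per file.sh@122 below
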
00:32:39.657 [2024-07-24 20:01:56.786434] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363956 ] 00:32:39.657 EAL: No free 2048 kB hugepages reported on node 1 00:32:39.657 [2024-07-24 20:01:56.851745] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.657 [2024-07-24 20:01:56.974183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.915 [2024-07-24 20:01:57.159335] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:40.482 20:01:57 keyring_file -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:32:40.482 20:01:57 keyring_file -- common/autotest_common.sh@865 -- # return 0 00:32:40.482 20:01:57 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:40.482 20:01:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.482 20:01:57 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:40.740 20:01:57 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:40.740 20:01:57 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:40.740 20:01:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:40.740 20:01:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.740 20:01:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.740 20:01:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:40.740 20:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.998 20:01:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:40.998 20:01:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:40.998 20:01:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:40.998 20:01:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.998 20:01:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.998 20:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.998 20:01:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:41.255 20:01:58 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:41.255 20:01:58 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:41.255 20:01:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:41.255 20:01:58 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:41.513 20:01:58 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:41.513 20:01:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:41.513 20:01:58 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.NtX6qxxlqG /tmp/tmp.CFe31mhI0q 00:32:41.513 20:01:58 keyring_file -- keyring/file.sh@20 -- # killprocess 1363956 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@951 -- # '[' -z 1363956 ']' 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@955 -- # kill -0 1363956 00:32:41.513 20:01:58 keyring_file -- 
common/autotest_common.sh@956 -- # uname 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1363956 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1363956' 00:32:41.513 killing process with pid 1363956 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@970 -- # kill 1363956 00:32:41.513 Received shutdown signal, test time was about 1.000000 seconds 00:32:41.513 00:32:41.513 Latency(us) 00:32:41.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.513 =================================================================================================================== 00:32:41.513 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:41.513 20:01:58 keyring_file -- common/autotest_common.sh@975 -- # wait 1363956 00:32:41.772 20:01:59 keyring_file -- keyring/file.sh@21 -- # killprocess 1362472 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@951 -- # '[' -z 1362472 ']' 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@955 -- # kill -0 1362472 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@956 -- # uname 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1362472 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1362472' 00:32:41.772 killing process with pid 1362472 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@970 -- # kill 1362472 00:32:41.772 [2024-07-24 20:01:59.078994] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:41.772 20:01:59 keyring_file -- common/autotest_common.sh@975 -- # wait 1362472 00:32:42.336 00:32:42.336 real 0m14.942s 00:32:42.336 user 0m36.889s 00:32:42.336 sys 0m3.415s 00:32:42.336 20:01:59 keyring_file -- common/autotest_common.sh@1127 -- # xtrace_disable 00:32:42.336 20:01:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:42.336 ************************************ 00:32:42.336 END TEST keyring_file 00:32:42.336 ************************************ 00:32:42.336 20:01:59 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:42.336 20:01:59 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:42.337 20:01:59 -- common/autotest_common.sh@1102 -- # '[' 2 -le 1 ']' 00:32:42.337 20:01:59 -- common/autotest_common.sh@1108 -- # xtrace_disable 00:32:42.337 20:01:59 -- common/autotest_common.sh@10 -- # set +x 00:32:42.337 ************************************ 00:32:42.337 START TEST keyring_linux 00:32:42.337 ************************************ 00:32:42.337 20:01:59 keyring_linux -- common/autotest_common.sh@1126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:42.337 * Looking for test 
storage... 00:32:42.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:42.337 20:01:59 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:42.337 20:01:59 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:42.337 20:01:59 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:42.337 20:01:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.337 20:01:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.337 20:01:59 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.337 20:01:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:42.337 20:01:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:42.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@706 -- # local prefix key digest 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@708 -- # 
key=00112233445566778899aabbccddeeff 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@708 -- # digest=0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@709 -- # python - 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:42.337 /tmp/:spdk-test:key0 00:32:42.337 20:01:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:42.337 20:01:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@719 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@706 -- # local prefix key digest 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@708 -- # prefix=NVMeTLSkey-1 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@708 -- # key=112233445566778899aabbccddeeff00 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@708 -- # digest=0 00:32:42.337 20:01:59 keyring_linux -- nvmf/common.sh@709 -- # python - 00:32:42.596 20:01:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:42.596 20:01:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:42.596 /tmp/:spdk-test:key1 00:32:42.596 20:01:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1364437 00:32:42.596 20:01:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:42.596 20:01:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1364437 00:32:42.596 20:01:59 keyring_linux -- common/autotest_common.sh@832 -- # '[' -z 1364437 ']' 00:32:42.596 20:01:59 keyring_linux -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.596 20:01:59 keyring_linux -- common/autotest_common.sh@837 -- # local max_retries=100 00:32:42.596 20:01:59 keyring_linux -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.596 20:01:59 keyring_linux -- common/autotest_common.sh@841 -- # xtrace_disable 00:32:42.596 20:01:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:42.596 [2024-07-24 20:01:59.772082] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
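For the kernel-keyring variant just prepared, prep_key wraps each raw hex string in the NVMe TLS interchange format via format_interchange_psk/format_key from nvmf/common.sh; the python heredocs above compute that wrapping. A hedged reconstruction of what they print for key0, assuming the payload is base64 of the configured key followed by its CRC32 appended little-endian (digest 00 meaning no PSK hash):

# Should reproduce the NVMeTLSkey-1:00:...: string stored for :spdk-test:key0.
python3 - <<'EOF'
import base64, zlib

key = b"00112233445566778899aabbccddeeff"      # key0 from linux.sh@13
crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: little-endian CRC32 trailer
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
EOF
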
00:32:42.596 [2024-07-24 20:01:59.772177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364437 ] 00:32:42.596 EAL: No free 2048 kB hugepages reported on node 1 00:32:42.596 [2024-07-24 20:01:59.838798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.596 [2024-07-24 20:01:59.956273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@865 -- # return 0 00:32:43.531 20:02:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@562 -- # xtrace_disable 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:43.531 [2024-07-24 20:02:00.705314] tcp.c: 737:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.531 null0 00:32:43.531 [2024-07-24 20:02:00.737371] tcp.c:1030:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:43.531 [2024-07-24 20:02:00.737878] tcp.c:1080:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@590 -- # [[ 0 == 0 ]] 00:32:43.531 20:02:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:43.531 803407200 00:32:43.531 20:02:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:43.531 712418005 00:32:43.531 20:02:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1364573 00:32:43.531 20:02:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1364573 /var/tmp/bperf.sock 00:32:43.531 20:02:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@832 -- # '[' -z 1364573 ']' 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@836 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@837 -- # local max_retries=100 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@839 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:43.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@841 -- # xtrace_disable 00:32:43.531 20:02:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:43.531 [2024-07-24 20:02:00.808530] Starting SPDK v24.09-pre git sha1 29c5e1f47 / DPDK 24.03.0 initialization... 
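linux.sh@66–67 above stage those interchange keys in the kernel session keyring so the second bdevperf can reference them by name as --psk :spdk-test:key0 rather than by file path; the serials echoed (803407200, 712418005) are per-run values. The round trip the test performs, condensed from the keyctl calls in this log (the payload placeholder stands for the interchange key; real runs pass the full string):

# Add, resolve, inspect and finally drop a user key in the session keyring.
sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64-payload>:" @s)
keyctl search @s user :spdk-test:key0    # get_keysn(): resolves the name back to $sn
keyctl print "$sn"                       # dumps the stored interchange key
keyctl unlink "$sn"                      # cleanup(): the "1 links removed" lines later
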
00:32:43.531 [2024-07-24 20:02:00.808616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364573 ] 00:32:43.531 EAL: No free 2048 kB hugepages reported on node 1 00:32:43.531 [2024-07-24 20:02:00.870345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.789 [2024-07-24 20:02:00.990001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:43.789 20:02:01 keyring_linux -- common/autotest_common.sh@861 -- # (( i == 0 )) 00:32:43.789 20:02:01 keyring_linux -- common/autotest_common.sh@865 -- # return 0 00:32:43.789 20:02:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:43.790 20:02:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:44.048 20:02:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:44.048 20:02:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:44.306 20:02:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:44.306 20:02:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:44.564 [2024-07-24 20:02:01.806922] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:44.564 nvme0n1 00:32:44.564 20:02:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:44.564 20:02:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:44.564 20:02:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:44.564 20:02:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:44.564 20:02:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:44.564 20:02:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.821 20:02:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:44.821 20:02:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:44.821 20:02:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:44.821 20:02:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:44.821 20:02:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.821 20:02:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:44.821 20:02:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.079 20:02:02 keyring_linux -- keyring/linux.sh@25 -- # sn=803407200 00:32:45.079 20:02:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:45.079 20:02:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:45.079 20:02:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 803407200 == \8\0\3\4\0\7\2\0\0 ]] 00:32:45.079 20:02:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 803407200 00:32:45.080 20:02:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:45.080 20:02:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.344 Running I/O for 1 seconds... 00:32:46.353 00:32:46.353 Latency(us) 00:32:46.353 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.353 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:46.353 nvme0n1 : 1.01 6711.27 26.22 0.00 0.00 18946.80 5995.33 26408.58 00:32:46.353 =================================================================================================================== 00:32:46.353 Total : 6711.27 26.22 0.00 0.00 18946.80 5995.33 26408.58 00:32:46.353 0 00:32:46.353 20:02:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:46.353 20:02:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:46.612 20:02:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:46.612 20:02:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:46.612 20:02:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:46.612 20:02:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:46.612 20:02:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:46.612 20:02:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:46.870 20:02:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:46.870 20:02:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:46.870 20:02:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:46.870 20:02:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:46.870 20:02:04 keyring_linux -- common/autotest_common.sh@651 -- # local es=0 00:32:46.871 20:02:04 keyring_linux -- common/autotest_common.sh@653 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:46.871 20:02:04 keyring_linux -- common/autotest_common.sh@639 -- # local arg=bperf_cmd 00:32:46.871 20:02:04 keyring_linux -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:46.871 20:02:04 keyring_linux -- common/autotest_common.sh@643 -- # type -t bperf_cmd 00:32:46.871 20:02:04 keyring_linux -- common/autotest_common.sh@643 -- # case "$(type -t "$arg")" in 00:32:46.871 20:02:04 keyring_linux -- common/autotest_common.sh@654 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:46.871 20:02:04 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:47.129 [2024-07-24 20:02:04.274899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 431:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:47.129 [2024-07-24 20:02:04.274979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec030 (107): Transport endpoint is not connected 00:32:47.129 [2024-07-24 20:02:04.275967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ec030 (9): Bad file descriptor 00:32:47.129 [2024-07-24 20:02:04.276965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:47.129 [2024-07-24 20:02:04.276990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:47.129 [2024-07-24 20:02:04.277006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:47.129 request: 00:32:47.129 { 00:32:47.129 "name": "nvme0", 00:32:47.129 "trtype": "tcp", 00:32:47.129 "traddr": "127.0.0.1", 00:32:47.129 "adrfam": "ipv4", 00:32:47.129 "trsvcid": "4420", 00:32:47.129 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:47.129 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:47.129 "prchk_reftag": false, 00:32:47.129 "prchk_guard": false, 00:32:47.129 "hdgst": false, 00:32:47.129 "ddgst": false, 00:32:47.129 "psk": ":spdk-test:key1", 00:32:47.129 "method": "bdev_nvme_attach_controller", 00:32:47.129 "req_id": 1 00:32:47.129 } 00:32:47.129 Got JSON-RPC error response 00:32:47.129 response: 00:32:47.129 { 00:32:47.129 "code": -5, 00:32:47.129 "message": "Input/output error" 00:32:47.129 } 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@654 -- # es=1 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@662 -- # (( es > 128 )) 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@673 -- # [[ -n '' ]] 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@678 -- # (( !es == 0 )) 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@33 -- # sn=803407200 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 803407200 00:32:47.129 1 links removed 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@33 -- # sn=712418005 00:32:47.129 
20:02:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 712418005 00:32:47.129 1 links removed 00:32:47.129 20:02:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1364573 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@951 -- # '[' -z 1364573 ']' 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@955 -- # kill -0 1364573 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@956 -- # uname 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1364573 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@957 -- # process_name=reactor_1 00:32:47.129 20:02:04 keyring_linux -- common/autotest_common.sh@961 -- # '[' reactor_1 = sudo ']' 00:32:47.130 20:02:04 keyring_linux -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1364573' 00:32:47.130 killing process with pid 1364573 00:32:47.130 20:02:04 keyring_linux -- common/autotest_common.sh@970 -- # kill 1364573 00:32:47.130 Received shutdown signal, test time was about 1.000000 seconds 00:32:47.130 00:32:47.130 Latency(us) 00:32:47.130 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.130 =================================================================================================================== 00:32:47.130 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:47.130 20:02:04 keyring_linux -- common/autotest_common.sh@975 -- # wait 1364573 00:32:47.387 20:02:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1364437 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@951 -- # '[' -z 1364437 ']' 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@955 -- # kill -0 1364437 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@956 -- # uname 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@956 -- # '[' Linux = Linux ']' 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@957 -- # ps --no-headers -o comm= 1364437 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@957 -- # process_name=reactor_0 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@961 -- # '[' reactor_0 = sudo ']' 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@969 -- # echo 'killing process with pid 1364437' 00:32:47.387 killing process with pid 1364437 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@970 -- # kill 1364437 00:32:47.387 20:02:04 keyring_linux -- common/autotest_common.sh@975 -- # wait 1364437 00:32:47.958 00:32:47.958 real 0m5.523s 00:32:47.958 user 0m9.971s 00:32:47.958 sys 0m1.764s 00:32:47.959 20:02:05 keyring_linux -- common/autotest_common.sh@1127 -- # xtrace_disable 00:32:47.959 20:02:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:47.959 ************************************ 00:32:47.959 END TEST keyring_linux 00:32:47.959 ************************************ 00:32:47.959 20:02:05 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- 
spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:47.959 20:02:05 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:47.959 20:02:05 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:47.959 20:02:05 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:47.959 20:02:05 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:47.959 20:02:05 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:47.959 20:02:05 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:47.959 20:02:05 -- common/autotest_common.sh@725 -- # xtrace_disable 00:32:47.959 20:02:05 -- common/autotest_common.sh@10 -- # set +x 00:32:47.959 20:02:05 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:47.959 20:02:05 -- common/autotest_common.sh@1393 -- # local autotest_es=0 00:32:47.959 20:02:05 -- common/autotest_common.sh@1394 -- # xtrace_disable 00:32:47.959 20:02:05 -- common/autotest_common.sh@10 -- # set +x 00:32:49.858 INFO: APP EXITING 00:32:49.858 INFO: killing all VMs 00:32:49.858 INFO: killing vhost app 00:32:49.858 INFO: EXIT DONE 00:32:50.424 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:50.424 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:50.682 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:50.682 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:50.682 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:50.682 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:50.682 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:50.682 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:50.682 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:50.682 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:50.682 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:50.682 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:50.682 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:50.682 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:50.682 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:50.682 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:50.682 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:52.058 Cleaning 00:32:52.058 Removing: /var/run/dpdk/spdk0/config 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:52.058 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:52.058 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:52.058 Removing: /var/run/dpdk/spdk1/config 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:52.058 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:52.058 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:52.058 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:52.058 Removing: /var/run/dpdk/spdk2/config 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:52.058 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:52.058 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:52.058 Removing: /var/run/dpdk/spdk3/config 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:52.058 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:52.058 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:52.058 Removing: /var/run/dpdk/spdk4/config 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:52.058 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:52.058 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:52.058 Removing: /dev/shm/bdev_svc_trace.1 00:32:52.058 Removing: /dev/shm/nvmf_trace.0 00:32:52.058 Removing: /dev/shm/spdk_tgt_trace.pid1048563 00:32:52.058 Removing: /var/run/dpdk/spdk0 00:32:52.058 Removing: /var/run/dpdk/spdk1 00:32:52.058 Removing: /var/run/dpdk/spdk2 00:32:52.058 Removing: /var/run/dpdk/spdk3 00:32:52.058 Removing: /var/run/dpdk/spdk4 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1046887 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1047634 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1048563 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1048999 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1049689 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1049837 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1050549 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1050561 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1050803 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1052025 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1053039 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1053245 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1053535 
00:32:52.058 Removing: /var/run/dpdk/spdk_pid1053745 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1053943 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1054205 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1054373 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1054556 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1054832 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1057221 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1057383 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1057563 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1057687 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1057997 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1058121 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1058434 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1058555 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1058730 00:32:52.058 Removing: /var/run/dpdk/spdk_pid1058867 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1059030 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1059158 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1059527 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1059686 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1059885 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1060055 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1060194 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1060278 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1060543 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1060698 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1060893 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1061129 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1061292 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1061522 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1061723 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1061876 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1062126 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1062311 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1062470 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1062741 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1062905 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1063056 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1063335 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1063491 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1063653 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1063929 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1064091 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1064252 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1064439 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1064643 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1066719 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1069330 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1076854 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1077262 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1079772 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1080053 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1082560 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1086272 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1088459 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1094864 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1100073 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1101275 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1102013 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1113034 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1115314 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1141424 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1144711 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1148568 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1153003 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1153006 00:32:52.317 Removing: /var/run/dpdk/spdk_pid1153662 
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1154200
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1154852
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1155256
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1155258
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1155520
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1155531
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1155623
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1156195
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1156848
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1157507
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1157907
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1157910
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1158056
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1159066
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1159787
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1165113
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1190466
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1193249
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1194436
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1195751
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1195890
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1196024
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1196052
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1196610
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1197925
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1198657
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1198980
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1201330
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1201796
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1202319
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1204836
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1208235
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1208236
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1208237
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1210461
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1215189
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1217953
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1221717
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1222799
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1223895
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1226470
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1228839
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1233173
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1233178
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1236082
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1236216
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1236359
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1236622
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1236628
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1239628
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1240341
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1243051
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1244981
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1248522
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1251831
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1258061
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1262416
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1262511
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1274847
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1275493
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1276169
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1276693
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1277255
00:32:52.317 Removing: /var/run/dpdk/spdk_pid1277688
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1278098
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1278502
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1281126
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1281268
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1285064
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1285230
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1288582
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1291062
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1297867
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1298387
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1300887
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1301160
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1303652
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1307333
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1309989
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1316341
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1321522
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1322707
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1323366
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1333639
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1335886
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1337484
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1342398
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1342518
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1345413
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1347424
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1348824
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1349567
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1350987
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1351799
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1357139
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1357530
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1357918
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1359356
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1359759
00:32:52.318 Removing: /var/run/dpdk/spdk_pid1360155
00:32:52.575 Removing: /var/run/dpdk/spdk_pid1362472
00:32:52.575 Removing: /var/run/dpdk/spdk_pid1362488
00:32:52.575 Removing: /var/run/dpdk/spdk_pid1363956
00:32:52.575 Removing: /var/run/dpdk/spdk_pid1364437
00:32:52.575 Removing: /var/run/dpdk/spdk_pid1364573
00:32:52.575 Clean
00:32:52.575 20:02:09 -- common/autotest_common.sh@1452 -- # return 0
00:32:52.575 20:02:09 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup
00:32:52.575 20:02:09 -- common/autotest_common.sh@731 -- # xtrace_disable
00:32:52.575 20:02:09 -- common/autotest_common.sh@10 -- # set +x
00:32:52.575 20:02:09 -- spdk/autotest.sh@386 -- # timing_exit autotest
00:32:52.575 20:02:09 -- common/autotest_common.sh@731 -- # xtrace_disable
00:32:52.575 20:02:09 -- common/autotest_common.sh@10 -- # set +x
00:32:52.575 20:02:09 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:52.575 20:02:09 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:52.575 20:02:09 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:52.575 20:02:09 -- spdk/autotest.sh@391 -- # hash lcov
00:32:52.575 20:02:09 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:52.575 20:02:09 -- spdk/autotest.sh@393 -- # hostname
00:32:52.575 20:02:09 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:52.832 geninfo: WARNING: invalid characters removed from testname!
00:33:24.897 20:02:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:24.897 20:02:41 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:27.429 20:02:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:30.747 20:02:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:33.279 20:02:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:36.567 20:02:53 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:39.099 20:02:56 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
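[Note] The spdk/autotest.sh@393-@400 trace above is the coverage post-processing: geninfo/lcov captures the counters accumulated by the test run, lcov -a merges the pre-test baseline with that capture, and a chain of lcov -r filters strips bundled DPDK, system headers, and example/tool sources out of cov_total.info. Below is a minimal sketch of the same capture/merge/filter flow; the ./spdk and ./coverage paths are illustrative stand-ins for the CI paths, and the final genhtml step is an assumption (report rendering is not shown in this log).

  #!/usr/bin/env bash
  set -e
  src=./spdk
  out=./coverage
  mkdir -p "$out"

  # Baseline capture (-i records zeroed counters right after the build) and
  # the post-test capture with the counters the test run accumulated.
  lcov --no-external -q -i -c -d "$src" -t "$(hostname)" -o "$out/cov_base.info"
  lcov --no-external -q -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"

  # Merge baseline and test capture (-a appends tracefiles), so code that was
  # compiled but never executed still appears with zero hits.
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Prune bundled dependencies and system code, as autotest.sh@395-@399 does;
  # each -r filter rewrites cov_total.info in place.
  for pattern in '*/dpdk/*' '/usr/*'; do
      lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done

  # Assumed follow-on step: render the pruned tracefile as an HTML report.
  genhtml -q "$out/cov_total.info" -o "$out/html"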
00:33:39.099 20:02:56 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:39.099 20:02:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:39.099 20:02:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:39.099 20:02:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:39.099 20:02:56 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:39.099 20:02:56 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:39.099 20:02:56 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:39.099 20:02:56 -- paths/export.sh@5 -- $ export PATH
00:33:39.099 20:02:56 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:39.099 20:02:56 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:39.099 20:02:56 -- common/autobuild_common.sh@447 -- $ date +%s
00:33:39.099 20:02:56 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721844176.XXXXXX
00:33:39.099 20:02:56 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721844176.17bCju
00:33:39.099 20:02:56 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:33:39.099 20:02:56 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:33:39.099 20:02:56 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:39.099 20:02:56 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:39.099 20:02:56 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:39.099 20:02:56 -- common/autobuild_common.sh@463 -- $ get_config_params
00:33:39.099 20:02:56 -- common/autotest_common.sh@399 -- $ xtrace_disable
00:33:39.099 20:02:56 -- common/autotest_common.sh@10 -- $ set +x
00:33:39.099 20:02:56 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
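[Note] The paths/export.sh@2-@6 trace above shows why PATH carries so many duplicates: the pkgdep-generated export.sh prepends one tool directory per line and re-exports, without checking whether the directory is already present, so every sourcing grows the list. A minimal sketch of the same prepend pattern with a containment guard follows; the prepend_path helper is hypothetical and not part of export.sh.

  # Hypothetical helper: prepend a directory to PATH only when it is absent,
  # so sourcing the file repeatedly cannot accumulate duplicate entries.
  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;             # already present; leave PATH unchanged
          *) PATH="$1:$PATH" ;;    # otherwise prepend, as export.sh does
      esac
  }
  prepend_path /opt/golangci/1.54.2/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/go/1.21.1/bin
  export PATH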
00:33:39.099 20:02:56 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:33:39.099 20:02:56 -- pm/common@17 -- $ local monitor
00:33:39.099 20:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:39.099 20:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:39.099 20:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:39.099 20:02:56 -- pm/common@21 -- $ date +%s
00:33:39.099 20:02:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:39.099 20:02:56 -- pm/common@21 -- $ date +%s
00:33:39.099 20:02:56 -- pm/common@25 -- $ sleep 1
00:33:39.099 20:02:56 -- pm/common@21 -- $ date +%s
00:33:39.100 20:02:56 -- pm/common@21 -- $ date +%s
00:33:39.100 20:02:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844176
00:33:39.100 20:02:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844176
00:33:39.100 20:02:56 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844176
00:33:39.100 20:02:56 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721844176
00:33:39.100 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844176_collect-vmstat.pm.log
00:33:39.100 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844176_collect-cpu-load.pm.log
00:33:39.100 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844176_collect-cpu-temp.pm.log
00:33:39.100 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721844176_collect-bmc-pm.bmc.pm.log
00:33:40.034 20:02:57 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:33:40.034 20:02:57 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:33:40.034 20:02:57 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:40.034 20:02:57 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:40.034 20:02:57 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:40.034 20:02:57 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:40.034 20:02:57 -- common/autotest_common.sh@737 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:40.034 20:02:57 -- common/autotest_common.sh@738 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:40.034 20:02:57 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:40.034 20:02:57 -- spdk/autopackage.sh@20 -- $ exit 0
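[Note] timing_finish above renders the run's step timings: timing.txt accumulates collapsed records of the form "outer;inner seconds", which flamegraph.pl (from https://github.com/brendangregg/FlameGraph) turns into an SVG flame graph, using exactly the flags shown in the log. A minimal sketch follows; the sample records are invented for illustration and do not come from this run.

  # Build a tiny collapsed-stack timing file; each line is a semicolon-
  # separated step path followed by its duration in seconds.
  printf '%s\n' \
      'autotest;setup 12' \
      'autotest;nvmf_tcp_tests 310' \
      'autopackage;build_packaging 45' > timing.txt
  # Render it with the same flags the log shows for timing_finish.
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds timing.txt > timing.svg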
00:33:40.034 20:02:57 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:40.034 20:02:57 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:40.034 20:02:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:40.034 20:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:40.034 20:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:40.034 20:02:57 -- pm/common@44 -- $ pid=1375152
00:33:40.034 20:02:57 -- pm/common@50 -- $ kill -TERM 1375152
00:33:40.034 20:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:40.034 20:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:40.034 20:02:57 -- pm/common@44 -- $ pid=1375154
00:33:40.034 20:02:57 -- pm/common@50 -- $ kill -TERM 1375154
00:33:40.034 20:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:40.034 20:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:40.034 20:02:57 -- pm/common@44 -- $ pid=1375156
00:33:40.034 20:02:57 -- pm/common@50 -- $ kill -TERM 1375156
00:33:40.034 20:02:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:40.034 20:02:57 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:40.034 20:02:57 -- pm/common@44 -- $ pid=1375186
00:33:40.034 20:02:57 -- pm/common@50 -- $ sudo -E kill -TERM 1375186
00:33:40.034 + [[ -n 962921 ]]
00:33:40.034 + sudo kill 962921
00:33:40.044 [Pipeline] }
00:33:40.063 [Pipeline] // stage
00:33:40.069 [Pipeline] }
00:33:40.085 [Pipeline] // timeout
00:33:40.091 [Pipeline] }
00:33:40.107 [Pipeline] // catchError
00:33:40.121 [Pipeline] }
00:33:40.138 [Pipeline] // wrap
00:33:40.145 [Pipeline] }
00:33:40.159 [Pipeline] // catchError
00:33:40.165 [Pipeline] stage
00:33:40.167 [Pipeline] { (Epilogue)
00:33:40.177 [Pipeline] catchError
00:33:40.178 [Pipeline] {
00:33:40.190 [Pipeline] echo
00:33:40.192 Cleanup processes
00:33:40.196 [Pipeline] sh
00:33:40.476 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:40.476 1375289 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:40.476 1375417 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:40.495 [Pipeline] sh
00:33:40.781 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:40.781 ++ grep -v 'sudo pgrep'
00:33:40.781 ++ awk '{print $1}'
00:33:40.781 + sudo kill -9 1375289
00:33:40.797 [Pipeline] sh
00:33:41.086 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:49.243 [Pipeline] sh
00:33:49.558 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:49.558 Artifacts sizes are good
00:33:49.573 [Pipeline] archiveArtifacts
00:33:49.581 Archiving artifacts
00:33:49.790 [Pipeline] sh
00:33:50.072 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:50.086 [Pipeline] cleanWs
00:33:50.095 [WS-CLEANUP] Deleting project workspace...
00:33:50.095 [WS-CLEANUP] Deferred wipeout is used...
00:33:50.102 [WS-CLEANUP] done
00:33:50.104 [Pipeline] }
00:33:50.120 [Pipeline] // catchError
00:33:50.133 [Pipeline] sh
00:33:50.412 + logger -p user.info -t JENKINS-CI
00:33:50.420 [Pipeline] }
00:33:50.437 [Pipeline] // stage
00:33:50.443 [Pipeline] }
00:33:50.460 [Pipeline] // node
00:33:50.466 [Pipeline] End of Pipeline
00:33:50.503 Finished: SUCCESS
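[Note] The pm/common@42-@50 trace above is the resource-monitor teardown: each collector launched by start_monitor_resources recorded its PID under <output>/power/<name>.pid, and stop_monitor_resources walks those files and signals TERM (via sudo -E for the BMC collector, which ran as root). A minimal sketch of the pid-file pattern follows; the directory layout and the pid-file removal are assumptions for illustration, not the exact pm/common implementation.

  # Walk the per-collector pid files and ask each monitor to exit cleanly.
  power_dir=./output/power
  for name in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
      pidfile="$power_dir/$name.pid"
      [[ -e $pidfile ]] || continue          # collector never started; skip it
      kill -TERM "$(<"$pidfile")" || true    # TERM lets it flush its .pm.log
      rm -f "$pidfile"                       # assumed cleanup, not shown above
  done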